Test Report: Docker_Linux_crio_arm64 17877

313e97f706b26b221c5e58ce6be0ee030a1cb1f4:2024-03-28:33789

Failed tests (2/335)

Order  Failed test                                              Duration (s)
39     TestAddons/parallel/Ingress                              169.71
310    TestStartStop/group/old-k8s-version/serial/SecondStart   383.22
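
Either failure can be triaged by re-running the failing test in isolation from a minikube checkout. A minimal sketch, assuming the integration suite's usual flags and Makefile target (verify both against test/integration/main_test.go and the Makefile in this checkout):

    # Build the binary this report exercises (target name is an assumption;
    # adjust for your platform), then run one failing test against the same
    # driver/runtime combination as this job (docker + crio).
    make out/minikube-linux-arm64
    go test ./test/integration -v -timeout 60m \
      -run "TestAddons/parallel/Ingress" \
      --minikube-start-args="--driver=docker --container-runtime=crio"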
TestAddons/parallel/Ingress (169.71s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-564371 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-564371 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-564371 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d9d5489f-1ded-433e-9c4a-7decdf6d55b3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d9d5489f-1ded-433e-9c4a-7decdf6d55b3] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.005234121s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-564371 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-564371 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.406355519s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-564371 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-564371 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.071903701s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-564371 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-564371 addons disable ingress-dns --alsologtostderr -v=1: (1.460787675s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-564371 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-564371 addons disable ingress --alsologtostderr -v=1: (7.997627951s)
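
Exit status 28 above is curl's CURLE_OPERATION_TIMEDOUT propagated through minikube ssh: the request to the ingress controller on 127.0.0.1 inside the node never completed. A sketch of re-probing by hand against a live cluster; the --max-time bound and the extra kubectl listings are illustrative additions, not part of the test:

    # Re-issue the probe that timed out, verbosely and with a hard time bound.
    out/minikube-linux-arm64 -p addons-564371 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Check that the controller and the nginx backend are actually serving.
    kubectl --context addons-564371 -n ingress-nginx get pods -o wide
    kubectl --context addons-564371 get ingress,svc,pods -n default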
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-564371
helpers_test.go:235: (dbg) docker inspect addons-564371:

-- stdout --
	[
	    {
	        "Id": "a6677059c57846d42132760b1a70cbdddae89e6f536efc923cb40799c79cfe1f",
	        "Created": "2024-03-28T21:12:52.322225582Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1152648,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-28T21:12:52.580699515Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d0f05b8b802e4c4af20a90d686bad8329f07849a8fda1b1d1c1dc3f527691df0",
	        "ResolvConfPath": "/var/lib/docker/containers/a6677059c57846d42132760b1a70cbdddae89e6f536efc923cb40799c79cfe1f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a6677059c57846d42132760b1a70cbdddae89e6f536efc923cb40799c79cfe1f/hostname",
	        "HostsPath": "/var/lib/docker/containers/a6677059c57846d42132760b1a70cbdddae89e6f536efc923cb40799c79cfe1f/hosts",
	        "LogPath": "/var/lib/docker/containers/a6677059c57846d42132760b1a70cbdddae89e6f536efc923cb40799c79cfe1f/a6677059c57846d42132760b1a70cbdddae89e6f536efc923cb40799c79cfe1f-json.log",
	        "Name": "/addons-564371",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-564371:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-564371",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d8a0e9614abbec7bd9ca0fe449a0edc84b48197c32ef873fc5c634f4fc96539e-init/diff:/var/lib/docker/overlay2/0b3d5a8e71016a91702d908cf9c681d5044b73b0921a0445a612c018590a7fd5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8a0e9614abbec7bd9ca0fe449a0edc84b48197c32ef873fc5c634f4fc96539e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8a0e9614abbec7bd9ca0fe449a0edc84b48197c32ef873fc5c634f4fc96539e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8a0e9614abbec7bd9ca0fe449a0edc84b48197c32ef873fc5c634f4fc96539e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-564371",
	                "Source": "/var/lib/docker/volumes/addons-564371/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-564371",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-564371",
	                "name.minikube.sigs.k8s.io": "addons-564371",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b98265e62f8e36a35d0f2101a9a3ed2e92e87dcd2b5511acd5a252aa30d1f3af",
	            "SandboxKey": "/var/run/docker/netns/b98265e62f8e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34263"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34262"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34259"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34261"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34260"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-564371": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "0a325235d5fbb1087ca45758a5b5bed6e49e7a3d46223dc40bf490fa256ea1e2",
	                    "EndpointID": "eb91999d3c7d8d8f01b9bda678a1efbcd61661362636f12a08735164e48b80f2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-564371",
	                        "a6677059c578"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
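
The inspect output shows the node at 192.168.49.2 on the addons-564371 network, with its sshd published on 127.0.0.1:34263. Combined with the key path that appears in the "Last Start" log below, the node can also be reached directly when the minikube binary itself is suspect; a sketch:

    # Direct SSH into the kic container, bypassing `minikube ssh`.
    ssh -i /home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa \
        -p 34263 docker@127.0.0.1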
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-564371 -n addons-564371
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-564371 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-564371 logs -n 25: (1.508106285s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p download-only-755699                                                                     | download-only-755699   | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:12 UTC | 28 Mar 24 21:12 UTC |
	| delete  | -p download-only-856904                                                                     | download-only-856904   | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:12 UTC | 28 Mar 24 21:12 UTC |
	| delete  | -p download-only-198145                                                                     | download-only-198145   | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:12 UTC | 28 Mar 24 21:12 UTC |
	| start   | --download-only -p                                                                          | download-docker-434763 | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:12 UTC |                     |
	|         | download-docker-434763                                                                      |                        |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |                |                     |                     |
	| delete  | -p download-docker-434763                                                                   | download-docker-434763 | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:12 UTC | 28 Mar 24 21:12 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-668108   | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:12 UTC |                     |
	|         | binary-mirror-668108                                                                        |                        |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |                |                     |                     |
	|         | http://127.0.0.1:45097                                                                      |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |                |                     |                     |
	| delete  | -p binary-mirror-668108                                                                     | binary-mirror-668108   | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:12 UTC | 28 Mar 24 21:12 UTC |
	| addons  | enable dashboard -p                                                                         | addons-564371          | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:12 UTC |                     |
	|         | addons-564371                                                                               |                        |         |                |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-564371          | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:12 UTC |                     |
	|         | addons-564371                                                                               |                        |         |                |                     |                     |
	| start   | -p addons-564371 --wait=true                                                                | addons-564371          | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:12 UTC | 28 Mar 24 21:15 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |                |                     |                     |
	|         | --addons=registry                                                                           |                        |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |                |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |                |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |                |                     |                     |
	| ip      | addons-564371 ip                                                                            | addons-564371          | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:15 UTC | 28 Mar 24 21:15 UTC |
	| addons  | addons-564371 addons disable                                                                | addons-564371          | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:15 UTC | 28 Mar 24 21:15 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |                |                     |                     |
	|         | -v=1                                                                                        |                        |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-564371          | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:15 UTC | 28 Mar 24 21:15 UTC |
	|         | -p addons-564371                                                                            |                        |         |                |                     |                     |
	| ssh     | addons-564371 ssh cat                                                                       | addons-564371          | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:15 UTC | 28 Mar 24 21:15 UTC |
	|         | /opt/local-path-provisioner/pvc-001a98eb-32e3-4d5c-9dcb-b90328f56941_default_test-pvc/file1 |                        |         |                |                     |                     |
	| addons  | addons-564371 addons disable                                                                | addons-564371          | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:15 UTC | 28 Mar 24 21:15 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-564371          | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:16 UTC | 28 Mar 24 21:16 UTC |
	|         | addons-564371                                                                               |                        |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-564371          | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:16 UTC | 28 Mar 24 21:16 UTC |
	|         | -p addons-564371                                                                            |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | addons-564371 addons                                                                        | addons-564371          | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:16 UTC | 28 Mar 24 21:16 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | addons-564371 addons                                                                        | addons-564371          | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:16 UTC | 28 Mar 24 21:16 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | addons-564371 addons                                                                        | addons-564371          | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:16 UTC | 28 Mar 24 21:16 UTC |
	|         | disable metrics-server                                                                      |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-564371          | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:16 UTC | 28 Mar 24 21:16 UTC |
	|         | addons-564371                                                                               |                        |         |                |                     |                     |
	| ssh     | addons-564371 ssh curl -s                                                                   | addons-564371          | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:16 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |                |                     |                     |
	| ip      | addons-564371 ip                                                                            | addons-564371          | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:18 UTC | 28 Mar 24 21:18 UTC |
	| addons  | addons-564371 addons disable                                                                | addons-564371          | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:18 UTC | 28 Mar 24 21:18 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |                |                     |                     |
	|         | -v=1                                                                                        |                        |         |                |                     |                     |
	| addons  | addons-564371 addons disable                                                                | addons-564371          | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:18 UTC | 28 Mar 24 21:19 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 21:12:28
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 21:12:28.174574 1152207 out.go:291] Setting OutFile to fd 1 ...
	I0328 21:12:28.174751 1152207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:12:28.174781 1152207 out.go:304] Setting ErrFile to fd 2...
	I0328 21:12:28.174800 1152207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:12:28.175066 1152207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17877-1145955/.minikube/bin
	I0328 21:12:28.175557 1152207 out.go:298] Setting JSON to false
	I0328 21:12:28.176511 1152207 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17699,"bootTime":1711642650,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0328 21:12:28.176673 1152207 start.go:139] virtualization:  
	I0328 21:12:28.179374 1152207 out.go:177] * [addons-564371] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 21:12:28.182008 1152207 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 21:12:28.183997 1152207 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 21:12:28.182127 1152207 notify.go:220] Checking for updates...
	I0328 21:12:28.186201 1152207 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17877-1145955/kubeconfig
	I0328 21:12:28.188505 1152207 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17877-1145955/.minikube
	I0328 21:12:28.190842 1152207 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0328 21:12:28.193044 1152207 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 21:12:28.196031 1152207 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 21:12:28.214420 1152207 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 21:12:28.214537 1152207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 21:12:28.275452 1152207 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-28 21:12:28.264593413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 21:12:28.275562 1152207 docker.go:295] overlay module found
	I0328 21:12:28.277866 1152207 out.go:177] * Using the docker driver based on user configuration
	I0328 21:12:28.279986 1152207 start.go:297] selected driver: docker
	I0328 21:12:28.280002 1152207 start.go:901] validating driver "docker" against <nil>
	I0328 21:12:28.280017 1152207 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 21:12:28.280703 1152207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 21:12:28.333406 1152207 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-28 21:12:28.32363361 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 21:12:28.333582 1152207 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 21:12:28.333828 1152207 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 21:12:28.335852 1152207 out.go:177] * Using Docker driver with root privileges
	I0328 21:12:28.337685 1152207 cni.go:84] Creating CNI manager for ""
	I0328 21:12:28.337708 1152207 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0328 21:12:28.337717 1152207 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0328 21:12:28.337805 1152207 start.go:340] cluster config:
	{Name:addons-564371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-564371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 21:12:28.340617 1152207 out.go:177] * Starting "addons-564371" primary control-plane node in "addons-564371" cluster
	I0328 21:12:28.342374 1152207 cache.go:121] Beginning downloading kic base image for docker with crio
	I0328 21:12:28.344479 1152207 out.go:177] * Pulling base image v0.0.43-1711559786-18485 ...
	I0328 21:12:28.346114 1152207 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 21:12:28.346169 1152207 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-arm64.tar.lz4
	I0328 21:12:28.346181 1152207 cache.go:56] Caching tarball of preloaded images
	I0328 21:12:28.346205 1152207 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon
	I0328 21:12:28.346266 1152207 preload.go:173] Found /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0328 21:12:28.346276 1152207 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0328 21:12:28.346642 1152207 profile.go:143] Saving config to /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/config.json ...
	I0328 21:12:28.346675 1152207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/config.json: {Name:mkc7666ba054ac39b0fd57622afda89f6793271f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 21:12:28.359408 1152207 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 to local cache
	I0328 21:12:28.359518 1152207 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory
	I0328 21:12:28.359536 1152207 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory, skipping pull
	I0328 21:12:28.359541 1152207 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 exists in cache, skipping pull
	I0328 21:12:28.359549 1152207 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 as a tarball
	I0328 21:12:28.359554 1152207 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 from local cache
	I0328 21:12:44.607299 1152207 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 from cached tarball
	I0328 21:12:44.607340 1152207 cache.go:194] Successfully downloaded all kic artifacts
	I0328 21:12:44.607369 1152207 start.go:360] acquireMachinesLock for addons-564371: {Name:mk55ff033f1035e149c8d8ddee9e3b2f1ced4388 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 21:12:44.607908 1152207 start.go:364] duration metric: took 517.338µs to acquireMachinesLock for "addons-564371"
	I0328 21:12:44.607944 1152207 start.go:93] Provisioning new machine with config: &{Name:addons-564371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-564371 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 21:12:44.608024 1152207 start.go:125] createHost starting for "" (driver="docker")
	I0328 21:12:44.610933 1152207 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0328 21:12:44.611159 1152207 start.go:159] libmachine.API.Create for "addons-564371" (driver="docker")
	I0328 21:12:44.611191 1152207 client.go:168] LocalClient.Create starting
	I0328 21:12:44.611312 1152207 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca.pem
	I0328 21:12:44.862547 1152207 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/cert.pem
	I0328 21:12:45.932160 1152207 cli_runner.go:164] Run: docker network inspect addons-564371 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0328 21:12:45.945548 1152207 cli_runner.go:211] docker network inspect addons-564371 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0328 21:12:45.945633 1152207 network_create.go:281] running [docker network inspect addons-564371] to gather additional debugging logs...
	I0328 21:12:45.945654 1152207 cli_runner.go:164] Run: docker network inspect addons-564371
	W0328 21:12:45.958469 1152207 cli_runner.go:211] docker network inspect addons-564371 returned with exit code 1
	I0328 21:12:45.958501 1152207 network_create.go:284] error running [docker network inspect addons-564371]: docker network inspect addons-564371: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-564371 not found
	I0328 21:12:45.958514 1152207 network_create.go:286] output of [docker network inspect addons-564371]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-564371 not found
	
	** /stderr **
	I0328 21:12:45.958615 1152207 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0328 21:12:45.971594 1152207 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002850760}
	I0328 21:12:45.971636 1152207 network_create.go:124] attempt to create docker network addons-564371 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0328 21:12:45.971696 1152207 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-564371 addons-564371
	I0328 21:12:46.040485 1152207 network_create.go:108] docker network addons-564371 192.168.49.0/24 created
	I0328 21:12:46.040522 1152207 kic.go:121] calculated static IP "192.168.49.2" for the "addons-564371" container
	I0328 21:12:46.040615 1152207 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0328 21:12:46.055237 1152207 cli_runner.go:164] Run: docker volume create addons-564371 --label name.minikube.sigs.k8s.io=addons-564371 --label created_by.minikube.sigs.k8s.io=true
	I0328 21:12:46.069869 1152207 oci.go:103] Successfully created a docker volume addons-564371
	I0328 21:12:46.069980 1152207 cli_runner.go:164] Run: docker run --rm --name addons-564371-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-564371 --entrypoint /usr/bin/test -v addons-564371:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 -d /var/lib
	I0328 21:12:48.043450 1152207 cli_runner.go:217] Completed: docker run --rm --name addons-564371-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-564371 --entrypoint /usr/bin/test -v addons-564371:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 -d /var/lib: (1.973419033s)
	I0328 21:12:48.043478 1152207 oci.go:107] Successfully prepared a docker volume addons-564371
	I0328 21:12:48.043516 1152207 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 21:12:48.043538 1152207 kic.go:194] Starting extracting preloaded images to volume ...
	I0328 21:12:48.043610 1152207 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-564371:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 -I lz4 -xf /preloaded.tar -C /extractDir
	I0328 21:12:52.258834 1152207 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-564371:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 -I lz4 -xf /preloaded.tar -C /extractDir: (4.215170437s)
	I0328 21:12:52.258869 1152207 kic.go:203] duration metric: took 4.215327088s to extract preloaded images to volume ...
	W0328 21:12:52.259006 1152207 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0328 21:12:52.259128 1152207 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0328 21:12:52.310222 1152207 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-564371 --name addons-564371 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-564371 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-564371 --network addons-564371 --ip 192.168.49.2 --volume addons-564371:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82
	I0328 21:12:52.589582 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Running}}
	I0328 21:12:52.605660 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Status}}
	I0328 21:12:52.624956 1152207 cli_runner.go:164] Run: docker exec addons-564371 stat /var/lib/dpkg/alternatives/iptables
	I0328 21:12:52.686359 1152207 oci.go:144] the created container "addons-564371" has a running status.
	I0328 21:12:52.686385 1152207 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa...
	I0328 21:12:52.996257 1152207 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0328 21:12:53.020238 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Status}}
	I0328 21:12:53.044479 1152207 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0328 21:12:53.044498 1152207 kic_runner.go:114] Args: [docker exec --privileged addons-564371 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0328 21:12:53.102107 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Status}}
	I0328 21:12:53.123983 1152207 machine.go:94] provisionDockerMachine start ...
	I0328 21:12:53.124068 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:12:53.152421 1152207 main.go:141] libmachine: Using SSH client type: native
	I0328 21:12:53.152692 1152207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 34263 <nil> <nil>}
	I0328 21:12:53.152701 1152207 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 21:12:53.153432 1152207 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36840->127.0.0.1:34263: read: connection reset by peer
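The handshake failure above is expected on a first dial: sshd inside the just-started container may not be accepting connections yet, and libmachine simply retries. A rough shell equivalent of that wait (key path from the log, port from this run's mapping):

	until ssh -i /home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa \
	      -o StrictHostKeyChecking=no -o ConnectTimeout=1 -p 34263 docker@127.0.0.1 true 2>/dev/null; do
	  sleep 1
	done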
	I0328 21:12:56.291384 1152207 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-564371
	
	I0328 21:12:56.291406 1152207 ubuntu.go:169] provisioning hostname "addons-564371"
	I0328 21:12:56.291470 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:12:56.306971 1152207 main.go:141] libmachine: Using SSH client type: native
	I0328 21:12:56.307222 1152207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 34263 <nil> <nil>}
	I0328 21:12:56.307238 1152207 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-564371 && echo "addons-564371" | sudo tee /etc/hostname
	I0328 21:12:56.455401 1152207 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-564371
	
	I0328 21:12:56.455523 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:12:56.471370 1152207 main.go:141] libmachine: Using SSH client type: native
	I0328 21:12:56.471627 1152207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 34263 <nil> <nil>}
	I0328 21:12:56.471650 1152207 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-564371' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-564371/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-564371' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 21:12:56.608008 1152207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 21:12:56.608032 1152207 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17877-1145955/.minikube CaCertPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17877-1145955/.minikube}
	I0328 21:12:56.608051 1152207 ubuntu.go:177] setting up certificates
	I0328 21:12:56.608061 1152207 provision.go:84] configureAuth start
	I0328 21:12:56.608149 1152207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-564371
	I0328 21:12:56.622382 1152207 provision.go:143] copyHostCerts
	I0328 21:12:56.622477 1152207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.pem (1082 bytes)
	I0328 21:12:56.622590 1152207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17877-1145955/.minikube/cert.pem (1123 bytes)
	I0328 21:12:56.622641 1152207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17877-1145955/.minikube/key.pem (1679 bytes)
	I0328 21:12:56.622687 1152207 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca-key.pem org=jenkins.addons-564371 san=[127.0.0.1 192.168.49.2 addons-564371 localhost minikube]
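The server cert generated here is signed by the minikube CA and carries the SANs listed in san=[...]. A standard openssl check that they landed in the certificate:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/17877-1145955/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# expect DNS:addons-564371, DNS:localhost, DNS:minikube plus IPs 127.0.0.1 and 192.168.49.2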
	I0328 21:12:57.016438 1152207 provision.go:177] copyRemoteCerts
	I0328 21:12:57.016522 1152207 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 21:12:57.016597 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:12:57.035267 1152207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34263 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa Username:docker}
	I0328 21:12:57.133376 1152207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0328 21:12:57.159102 1152207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0328 21:12:57.183078 1152207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 21:12:57.206617 1152207 provision.go:87] duration metric: took 598.542657ms to configureAuth
	I0328 21:12:57.206685 1152207 ubuntu.go:193] setting minikube options for container-runtime
	I0328 21:12:57.206895 1152207 config.go:182] Loaded profile config "addons-564371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 21:12:57.207002 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:12:57.222076 1152207 main.go:141] libmachine: Using SSH client type: native
	I0328 21:12:57.222339 1152207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 34263 <nil> <nil>}
	I0328 21:12:57.222361 1152207 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 21:12:57.454638 1152207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 21:12:57.454663 1152207 machine.go:97] duration metric: took 4.330660927s to provisionDockerMachine
	I0328 21:12:57.454674 1152207 client.go:171] duration metric: took 12.843476636s to LocalClient.Create
	I0328 21:12:57.454693 1152207 start.go:167] duration metric: took 12.843534728s to libmachine.API.Create "addons-564371"
	I0328 21:12:57.454701 1152207 start.go:293] postStartSetup for "addons-564371" (driver="docker")
	I0328 21:12:57.454712 1152207 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 21:12:57.454793 1152207 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 21:12:57.454837 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:12:57.470218 1152207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34263 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa Username:docker}
	I0328 21:12:57.568986 1152207 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 21:12:57.571914 1152207 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0328 21:12:57.571949 1152207 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0328 21:12:57.571960 1152207 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0328 21:12:57.571968 1152207 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0328 21:12:57.571982 1152207 filesync.go:126] Scanning /home/jenkins/minikube-integration/17877-1145955/.minikube/addons for local assets ...
	I0328 21:12:57.572047 1152207 filesync.go:126] Scanning /home/jenkins/minikube-integration/17877-1145955/.minikube/files for local assets ...
	I0328 21:12:57.572077 1152207 start.go:296] duration metric: took 117.37041ms for postStartSetup
	I0328 21:12:57.572433 1152207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-564371
	I0328 21:12:57.586590 1152207 profile.go:143] Saving config to /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/config.json ...
	I0328 21:12:57.586875 1152207 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 21:12:57.586931 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:12:57.601502 1152207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34263 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa Username:docker}
	I0328 21:12:57.692862 1152207 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0328 21:12:57.697167 1152207 start.go:128] duration metric: took 13.08912424s to createHost
	I0328 21:12:57.697193 1152207 start.go:83] releasing machines lock for "addons-564371", held for 13.089269109s
	I0328 21:12:57.697264 1152207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-564371
	I0328 21:12:57.711280 1152207 ssh_runner.go:195] Run: cat /version.json
	I0328 21:12:57.711345 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:12:57.711621 1152207 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 21:12:57.711676 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:12:57.735612 1152207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34263 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa Username:docker}
	I0328 21:12:57.738699 1152207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34263 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa Username:docker}
	I0328 21:12:57.827511 1152207 ssh_runner.go:195] Run: systemctl --version
	I0328 21:12:57.937533 1152207 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 21:12:58.082618 1152207 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0328 21:12:58.087003 1152207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 21:12:58.109301 1152207 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0328 21:12:58.109408 1152207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 21:12:58.141116 1152207 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
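Note that conflicting CNI configs are renamed with a .mk_disabled suffix rather than deleted, so CRI-O ignores them, the kindnet config written later takes precedence, and the change stays reversible. To see what was sidelined on the node (a sketch):

	minikube -p addons-564371 ssh "ls -l /etc/cni/net.d"
	# expect *.mk_disabled copies of the loopback, 87-podman-bridge.conflist and 100-crio-bridge.conf files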
	I0328 21:12:58.141145 1152207 start.go:494] detecting cgroup driver to use...
	I0328 21:12:58.141218 1152207 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0328 21:12:58.141310 1152207 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 21:12:58.158633 1152207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 21:12:58.169969 1152207 docker.go:217] disabling cri-docker service (if available) ...
	I0328 21:12:58.170052 1152207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 21:12:58.184878 1152207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 21:12:58.200219 1152207 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 21:12:58.283743 1152207 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 21:12:58.380465 1152207 docker.go:233] disabling docker service ...
	I0328 21:12:58.380583 1152207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 21:12:58.401425 1152207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 21:12:58.413559 1152207 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 21:12:58.504548 1152207 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 21:12:58.593325 1152207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 21:12:58.605693 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 21:12:58.622148 1152207 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 21:12:58.622249 1152207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 21:12:58.632842 1152207 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 21:12:58.632913 1152207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 21:12:58.643390 1152207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 21:12:58.654226 1152207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 21:12:58.665038 1152207 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 21:12:58.674866 1152207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 21:12:58.684867 1152207 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 21:12:58.701077 1152207 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
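Taken together, the sed edits above should leave the drop-in looking roughly like this (a reconstruction from the commands; the [crio.image]/[crio.runtime] section headers are assumed, only the keys are attested in the log):

	# /etc/crio/crio.conf.d/02-crio.conf, expected shape after the edits
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]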
	I0328 21:12:58.710811 1152207 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 21:12:58.719721 1152207 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 21:12:58.728201 1152207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 21:12:58.806674 1152207 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 21:12:58.930553 1152207 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 21:12:58.930637 1152207 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 21:12:58.934354 1152207 start.go:562] Will wait 60s for crictl version
	I0328 21:12:58.934449 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:12:58.938434 1152207 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 21:12:58.977292 1152207 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0328 21:12:58.977412 1152207 ssh_runner.go:195] Run: crio --version
	I0328 21:12:59.016222 1152207 ssh_runner.go:195] Run: crio --version
	I0328 21:12:59.054579 1152207 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.24.6 ...
	I0328 21:12:59.056606 1152207 cli_runner.go:164] Run: docker network inspect addons-564371 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0328 21:12:59.071416 1152207 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0328 21:12:59.075068 1152207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 21:12:59.086019 1152207 kubeadm.go:877] updating cluster {Name:addons-564371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-564371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 21:12:59.086147 1152207 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 21:12:59.086217 1152207 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 21:12:59.157547 1152207 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 21:12:59.157574 1152207 crio.go:433] Images already preloaded, skipping extraction
	I0328 21:12:59.157634 1152207 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 21:12:59.193306 1152207 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 21:12:59.193333 1152207 cache_images.go:84] Images are preloaded, skipping loading
	I0328 21:12:59.193341 1152207 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.29.3 crio true true} ...
	I0328 21:12:59.193436 1152207 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-564371 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-564371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 21:12:59.193520 1152207 ssh_runner.go:195] Run: crio config
	I0328 21:12:59.244179 1152207 cni.go:84] Creating CNI manager for ""
	I0328 21:12:59.244244 1152207 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0328 21:12:59.244282 1152207 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 21:12:59.244339 1152207 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-564371 NodeName:addons-564371 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 21:12:59.244534 1152207 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-564371"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
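Before handing this file to kubeadm init, it can be sanity-checked with kubeadm's own validator (present in the pinned v1.29.3 binary; a sketch, run inside the node):

	sudo /var/lib/minikube/binaries/v1.29.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml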
	
	I0328 21:12:59.244653 1152207 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 21:12:59.253597 1152207 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 21:12:59.253671 1152207 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 21:12:59.262548 1152207 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0328 21:12:59.281212 1152207 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 21:12:59.299694 1152207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0328 21:12:59.317936 1152207 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0328 21:12:59.321461 1152207 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 21:12:59.332010 1152207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 21:12:59.410338 1152207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 21:12:59.424200 1152207 certs.go:68] Setting up /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371 for IP: 192.168.49.2
	I0328 21:12:59.424267 1152207 certs.go:194] generating shared ca certs ...
	I0328 21:12:59.424319 1152207 certs.go:226] acquiring lock for ca certs: {Name:mk1e4b3d6020f96643d0b806687ddcafb6824b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 21:12:59.424850 1152207 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.key
	I0328 21:12:59.730335 1152207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.crt ...
	I0328 21:12:59.730374 1152207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.crt: {Name:mkabed5fd99691e9a5d4a7ff21f2db88a7d93eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 21:12:59.730613 1152207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.key ...
	I0328 21:12:59.730629 1152207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.key: {Name:mk76cbb01323e9838d9f63c3ac7812f7971b3638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 21:12:59.730723 1152207 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/proxy-client-ca.key
	I0328 21:13:00.186743 1152207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17877-1145955/.minikube/proxy-client-ca.crt ...
	I0328 21:13:00.186831 1152207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17877-1145955/.minikube/proxy-client-ca.crt: {Name:mk4c0367b797ce3f05983ae892e4053996b72c72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 21:13:00.187259 1152207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17877-1145955/.minikube/proxy-client-ca.key ...
	I0328 21:13:00.187313 1152207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17877-1145955/.minikube/proxy-client-ca.key: {Name:mk0a6606ca329c4bccac51c09d7f784936266a6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 21:13:00.188157 1152207 certs.go:256] generating profile certs ...
	I0328 21:13:00.188321 1152207 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.key
	I0328 21:13:00.188366 1152207 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt with IP's: []
	I0328 21:13:00.842781 1152207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt ...
	I0328 21:13:00.842822 1152207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: {Name:mk23e686b5a9e539f5a7ee1ecd455bf2a26b69b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 21:13:00.843012 1152207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.key ...
	I0328 21:13:00.843026 1152207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.key: {Name:mk639ad590c091d111df0a69f4ee17ee97adeb01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 21:13:00.844063 1152207 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/apiserver.key.4805c1b8
	I0328 21:13:00.844113 1152207 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/apiserver.crt.4805c1b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0328 21:13:01.048047 1152207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/apiserver.crt.4805c1b8 ...
	I0328 21:13:01.048078 1152207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/apiserver.crt.4805c1b8: {Name:mk734eda47ad5fbc234b5beda8dd4b0b887c7ed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 21:13:01.048260 1152207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/apiserver.key.4805c1b8 ...
	I0328 21:13:01.048279 1152207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/apiserver.key.4805c1b8: {Name:mk7d7d360a76d2ab320ef04f563a33ce3ae314f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 21:13:01.048363 1152207 certs.go:381] copying /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/apiserver.crt.4805c1b8 -> /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/apiserver.crt
	I0328 21:13:01.048450 1152207 certs.go:385] copying /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/apiserver.key.4805c1b8 -> /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/apiserver.key
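At this point the profile's apiserver cert is in place, signed by the minikubeCA generated earlier with SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2. A quick chain check (standard openssl, paths from the log):

	openssl verify \
	  -CAfile /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.crt \
	  /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/apiserver.crt
	# expect: .../apiserver.crt: OK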
	I0328 21:13:01.048513 1152207 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/proxy-client.key
	I0328 21:13:01.048538 1152207 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/proxy-client.crt with IP's: []
	I0328 21:13:01.720468 1152207 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/proxy-client.crt ...
	I0328 21:13:01.720500 1152207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/proxy-client.crt: {Name:mkd6167badd231cf2e1e2e8470d139bddbfbd1b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 21:13:01.720684 1152207 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/proxy-client.key ...
	I0328 21:13:01.720698 1152207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/proxy-client.key: {Name:mk3159272c7f96e6a278fb4a8cf4fe7fbbee74c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 21:13:01.720894 1152207 certs.go:484] found cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca-key.pem (1679 bytes)
	I0328 21:13:01.720942 1152207 certs.go:484] found cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca.pem (1082 bytes)
	I0328 21:13:01.720974 1152207 certs.go:484] found cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/cert.pem (1123 bytes)
	I0328 21:13:01.721006 1152207 certs.go:484] found cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/key.pem (1679 bytes)
	I0328 21:13:01.721593 1152207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 21:13:01.746046 1152207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0328 21:13:01.773528 1152207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 21:13:01.797529 1152207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 21:13:01.824941 1152207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0328 21:13:01.848654 1152207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0328 21:13:01.872992 1152207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 21:13:01.896814 1152207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 21:13:01.920018 1152207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 21:13:01.943022 1152207 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 21:13:01.960226 1152207 ssh_runner.go:195] Run: openssl version
	I0328 21:13:01.965416 1152207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 21:13:01.974439 1152207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 21:13:01.977662 1152207 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 28 21:12 /usr/share/ca-certificates/minikubeCA.pem
	I0328 21:13:01.977750 1152207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 21:13:01.985226 1152207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
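The b5213941.0 link name is not arbitrary: OpenSSL locates CAs in /etc/ssl/certs by <subject-hash>.0, and the subject hash is exactly what the x509 -hash call above computes:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941  -> hence the /etc/ssl/certs/b5213941.0 symlink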
	I0328 21:13:01.994667 1152207 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 21:13:01.998149 1152207 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0328 21:13:01.998201 1152207 kubeadm.go:391] StartCluster: {Name:addons-564371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-564371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 21:13:01.998287 1152207 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 21:13:01.998352 1152207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 21:13:02.042324 1152207 cri.go:89] found id: ""
	I0328 21:13:02.042403 1152207 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0328 21:13:02.051747 1152207 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 21:13:02.060941 1152207 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0328 21:13:02.061009 1152207 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 21:13:02.070494 1152207 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 21:13:02.070517 1152207 kubeadm.go:156] found existing configuration files:
	
	I0328 21:13:02.070570 1152207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 21:13:02.079487 1152207 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 21:13:02.079647 1152207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 21:13:02.088488 1152207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 21:13:02.097476 1152207 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 21:13:02.097571 1152207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 21:13:02.106187 1152207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 21:13:02.114906 1152207 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 21:13:02.114995 1152207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 21:13:02.123430 1152207 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 21:13:02.132191 1152207 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 21:13:02.132281 1152207 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 21:13:02.140889 1152207 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0328 21:13:02.187345 1152207 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 21:13:02.187685 1152207 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 21:13:02.228323 1152207 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0328 21:13:02.228462 1152207 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1056-aws
	I0328 21:13:02.228524 1152207 kubeadm.go:309] OS: Linux
	I0328 21:13:02.228596 1152207 kubeadm.go:309] CGROUPS_CPU: enabled
	I0328 21:13:02.228676 1152207 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0328 21:13:02.228749 1152207 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0328 21:13:02.228829 1152207 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0328 21:13:02.228905 1152207 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0328 21:13:02.228985 1152207 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0328 21:13:02.229057 1152207 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0328 21:13:02.229136 1152207 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0328 21:13:02.229208 1152207 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0328 21:13:02.294819 1152207 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 21:13:02.294996 1152207 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 21:13:02.295135 1152207 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 21:13:02.528278 1152207 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 21:13:02.533575 1152207 out.go:204]   - Generating certificates and keys ...
	I0328 21:13:02.533660 1152207 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 21:13:02.533730 1152207 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 21:13:02.834644 1152207 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0328 21:13:03.089020 1152207 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0328 21:13:03.288164 1152207 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0328 21:13:03.869488 1152207 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0328 21:13:04.112058 1152207 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0328 21:13:04.112403 1152207 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-564371 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0328 21:13:04.629921 1152207 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0328 21:13:04.630236 1152207 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-564371 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0328 21:13:05.129332 1152207 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0328 21:13:05.781972 1152207 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0328 21:13:06.538106 1152207 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0328 21:13:06.538409 1152207 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 21:13:07.068872 1152207 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 21:13:07.545224 1152207 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 21:13:08.542561 1152207 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 21:13:09.147621 1152207 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 21:13:09.633840 1152207 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 21:13:09.634618 1152207 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 21:13:09.637644 1152207 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 21:13:09.640946 1152207 out.go:204]   - Booting up control plane ...
	I0328 21:13:09.641051 1152207 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 21:13:09.641132 1152207 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 21:13:09.641201 1152207 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 21:13:09.651402 1152207 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 21:13:09.652388 1152207 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 21:13:09.652631 1152207 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 21:13:09.739489 1152207 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 21:13:17.242262 1152207 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.502878 seconds
	I0328 21:13:17.264499 1152207 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 21:13:17.278746 1152207 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 21:13:17.803783 1152207 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 21:13:17.803982 1152207 kubeadm.go:309] [mark-control-plane] Marking the node addons-564371 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 21:13:18.315701 1152207 kubeadm.go:309] [bootstrap-token] Using token: 20wyzy.ouzrgooygfamyxdk
	I0328 21:13:18.317806 1152207 out.go:204]   - Configuring RBAC rules ...
	I0328 21:13:18.317939 1152207 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 21:13:18.323857 1152207 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 21:13:18.334781 1152207 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 21:13:18.338905 1152207 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 21:13:18.343032 1152207 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 21:13:18.348020 1152207 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 21:13:18.361830 1152207 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 21:13:18.600528 1152207 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 21:13:18.736400 1152207 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 21:13:18.737460 1152207 kubeadm.go:309] 
	I0328 21:13:18.737529 1152207 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 21:13:18.737535 1152207 kubeadm.go:309] 
	I0328 21:13:18.737613 1152207 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 21:13:18.737618 1152207 kubeadm.go:309] 
	I0328 21:13:18.737643 1152207 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 21:13:18.737700 1152207 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 21:13:18.737749 1152207 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 21:13:18.737754 1152207 kubeadm.go:309] 
	I0328 21:13:18.737806 1152207 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 21:13:18.737810 1152207 kubeadm.go:309] 
	I0328 21:13:18.737857 1152207 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 21:13:18.737862 1152207 kubeadm.go:309] 
	I0328 21:13:18.737912 1152207 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 21:13:18.737987 1152207 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 21:13:18.738054 1152207 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 21:13:18.738058 1152207 kubeadm.go:309] 
	I0328 21:13:18.738139 1152207 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 21:13:18.738214 1152207 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 21:13:18.738219 1152207 kubeadm.go:309] 
	I0328 21:13:18.738312 1152207 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 20wyzy.ouzrgooygfamyxdk \
	I0328 21:13:18.738414 1152207 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08fab7d4d6582c39c9acf943b499ad6adf1f89ccdb759a7cfd2a5d62d17cb45b \
	I0328 21:13:18.738434 1152207 kubeadm.go:309] 	--control-plane 
	I0328 21:13:18.738439 1152207 kubeadm.go:309] 
	I0328 21:13:18.738521 1152207 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 21:13:18.738527 1152207 kubeadm.go:309] 
	I0328 21:13:18.738862 1152207 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 20wyzy.ouzrgooygfamyxdk \
	I0328 21:13:18.739013 1152207 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08fab7d4d6582c39c9acf943b499ad6adf1f89ccdb759a7cfd2a5d62d17cb45b 
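The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's DER-encoded public key. It can be recomputed from ca.crt with the standard pipeline from the kubeadm docs (a sketch, run on the node; the CA key here is RSA):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# expect the sha256 value shown in the join command above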
	I0328 21:13:18.741088 1152207 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1056-aws\n", err: exit status 1
	I0328 21:13:18.741196 1152207 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 21:13:18.741213 1152207 cni.go:84] Creating CNI manager for ""
	I0328 21:13:18.741220 1152207 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0328 21:13:18.744155 1152207 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0328 21:13:18.746443 1152207 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0328 21:13:18.753203 1152207 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0328 21:13:18.753223 1152207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0328 21:13:18.804568 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0328 21:13:19.116702 1152207 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 21:13:19.116834 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:19.116876 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-564371 minikube.k8s.io/updated_at=2024_03_28T21_13_19_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=2883ffbf70a3cdb38617e0fd1a9bb421b3d79967 minikube.k8s.io/name=addons-564371 minikube.k8s.io/primary=true
	I0328 21:13:19.234814 1152207 ops.go:34] apiserver oom_adj: -16
	I0328 21:13:19.234899 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:19.735730 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:20.235471 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:20.735049 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:21.235412 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:21.735060 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:22.235200 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:22.735642 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:23.236022 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:23.735953 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:24.235456 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:24.735600 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:25.235154 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:25.735733 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:26.235660 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:26.735705 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:27.235786 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:27.735726 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:28.235607 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:28.735819 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:29.235829 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:29.735428 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:30.235086 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:30.735674 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:31.235892 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:31.735510 1152207 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 21:13:31.824649 1152207 kubeadm.go:1107] duration metric: took 12.707871127s to wait for elevateKubeSystemPrivileges
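The burst of identical `kubectl get sa default` invocations above is a readiness poll: bootstrap is only treated as finished once the `default` ServiceAccount exists, and the runner re-checks roughly every 500ms until it does (12.7s in this run). A minimal sketch of that poll, shelling out to `kubectl` directly (the real code drives kubectl over SSH):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls "kubectl get sa default" until it succeeds
	// or the timeout elapses, mirroring the 500ms loop in the log.
	func waitForDefaultSA(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
	}

	func main() {
		start := time.Now()
		if err := waitForDefaultSA(2 * time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("default ServiceAccount ready after %s\n", time.Since(start))
	}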
	W0328 21:13:31.824686 1152207 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 21:13:31.824693 1152207 kubeadm.go:393] duration metric: took 29.82649784s to StartCluster
	I0328 21:13:31.824710 1152207 settings.go:142] acquiring lock: {Name:mka22e5d6cd66b2677ac3cce373c1a6e13c189c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 21:13:31.825244 1152207 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17877-1145955/kubeconfig
	I0328 21:13:31.825663 1152207 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17877-1145955/kubeconfig: {Name:mk01de9100d65131f49674a0d1051891ca674cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 21:13:31.826350 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0328 21:13:31.826367 1152207 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 21:13:31.828778 1152207 out.go:177] * Verifying Kubernetes components...
	I0328 21:13:31.826639 1152207 config.go:182] Loaded profile config "addons-564371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 21:13:31.826649 1152207 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
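From here the per-addon installers run concurrently, which is why the "Setting addon ..." and "docker container inspect" lines below interleave and the timestamps are not strictly monotonic. A minimal sketch of that fan-out pattern, under the assumption that each addon install is independent (names and the `enable` callback are illustrative, not minikube's API):

	package main

	import (
		"fmt"
		"sync"
		"time"
	)

	// enableAll runs one installer goroutine per addon and waits for
	// all of them, so their log output interleaves like the lines below.
	func enableAll(addons []string, enable func(string) error) {
		var wg sync.WaitGroup
		for _, a := range addons {
			wg.Add(1)
			go func(name string) {
				defer wg.Done()
				if err := enable(name); err != nil {
					fmt.Printf("! Enabling '%s' returned an error: %v\n", name, err)
				}
			}(a)
		}
		wg.Wait()
	}

	func main() {
		addons := []string{"yakd", "ingress-dns", "cloud-spanner", "inspektor-gadget"}
		enableAll(addons, func(name string) error {
			time.Sleep(10 * time.Millisecond) // stand-in for the real install work
			fmt.Println("Setting addon", name+"=true")
			return nil
		})
	}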
	I0328 21:13:31.830821 1152207 addons.go:69] Setting yakd=true in profile "addons-564371"
	I0328 21:13:31.830849 1152207 addons.go:234] Setting addon yakd=true in "addons-564371"
	I0328 21:13:31.830882 1152207 host.go:66] Checking if "addons-564371" exists ...
	I0328 21:13:31.831418 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Status}}
	I0328 21:13:31.831604 1152207 addons.go:69] Setting ingress-dns=true in profile "addons-564371"
	I0328 21:13:31.831626 1152207 addons.go:234] Setting addon ingress-dns=true in "addons-564371"
	I0328 21:13:31.831666 1152207 host.go:66] Checking if "addons-564371" exists ...
	I0328 21:13:31.832028 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Status}}
	I0328 21:13:31.832448 1152207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 21:13:31.832567 1152207 addons.go:69] Setting cloud-spanner=true in profile "addons-564371"
	I0328 21:13:31.832589 1152207 addons.go:234] Setting addon cloud-spanner=true in "addons-564371"
	I0328 21:13:31.832610 1152207 host.go:66] Checking if "addons-564371" exists ...
	I0328 21:13:31.832961 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Status}}
	I0328 21:13:31.833234 1152207 addons.go:69] Setting inspektor-gadget=true in profile "addons-564371"
	I0328 21:13:31.833264 1152207 addons.go:234] Setting addon inspektor-gadget=true in "addons-564371"
	I0328 21:13:31.833297 1152207 host.go:66] Checking if "addons-564371" exists ...
	I0328 21:13:31.833673 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Status}}
	I0328 21:13:31.835785 1152207 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-564371"
	I0328 21:13:31.835851 1152207 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-564371"
	I0328 21:13:31.835879 1152207 host.go:66] Checking if "addons-564371" exists ...
	I0328 21:13:31.836359 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Status}}
	I0328 21:13:31.844663 1152207 addons.go:69] Setting default-storageclass=true in profile "addons-564371"
	I0328 21:13:31.844717 1152207 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-564371"
	I0328 21:13:31.845140 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Status}}
	I0328 21:13:31.846338 1152207 addons.go:69] Setting metrics-server=true in profile "addons-564371"
	I0328 21:13:31.846415 1152207 addons.go:234] Setting addon metrics-server=true in "addons-564371"
	I0328 21:13:31.846481 1152207 host.go:66] Checking if "addons-564371" exists ...
	I0328 21:13:31.847062 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Status}}
	I0328 21:13:31.860158 1152207 addons.go:69] Setting gcp-auth=true in profile "addons-564371"
	I0328 21:13:31.860214 1152207 mustload.go:65] Loading cluster: addons-564371
	I0328 21:13:31.860405 1152207 config.go:182] Loaded profile config "addons-564371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 21:13:31.860647 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Status}}
	I0328 21:13:31.876393 1152207 addons.go:69] Setting ingress=true in profile "addons-564371"
	I0328 21:13:31.876437 1152207 addons.go:234] Setting addon ingress=true in "addons-564371"
	I0328 21:13:31.876482 1152207 host.go:66] Checking if "addons-564371" exists ...
	I0328 21:13:31.876966 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Status}}
	I0328 21:13:31.884438 1152207 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-564371"
	I0328 21:13:31.884485 1152207 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-564371"
	I0328 21:13:31.884522 1152207 host.go:66] Checking if "addons-564371" exists ...
	I0328 21:13:31.884976 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Status}}
	I0328 21:13:31.909341 1152207 addons.go:69] Setting registry=true in profile "addons-564371"
	I0328 21:13:31.909435 1152207 addons.go:234] Setting addon registry=true in "addons-564371"
	I0328 21:13:31.909514 1152207 host.go:66] Checking if "addons-564371" exists ...
	I0328 21:13:31.909966 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Status}}
	I0328 21:13:31.946462 1152207 addons.go:69] Setting storage-provisioner=true in profile "addons-564371"
	I0328 21:13:31.946528 1152207 addons.go:234] Setting addon storage-provisioner=true in "addons-564371"
	I0328 21:13:31.946585 1152207 host.go:66] Checking if "addons-564371" exists ...
	I0328 21:13:31.947047 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Status}}
	I0328 21:13:31.954202 1152207 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0328 21:13:31.957209 1152207 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0328 21:13:31.957271 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0328 21:13:31.957384 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:13:31.965737 1152207 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-564371"
	I0328 21:13:31.965782 1152207 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-564371"
	I0328 21:13:31.965950 1152207 addons.go:69] Setting volumesnapshots=true in profile "addons-564371"
	I0328 21:13:31.965980 1152207 addons.go:234] Setting addon volumesnapshots=true in "addons-564371"
	I0328 21:13:31.966020 1152207 host.go:66] Checking if "addons-564371" exists ...
	I0328 21:13:31.966455 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Status}}
	I0328 21:13:31.973256 1152207 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0328 21:13:31.971374 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Status}}
	I0328 21:13:31.976595 1152207 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 21:13:31.976607 1152207 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0328 21:13:31.976616 1152207 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0328 21:13:31.986479 1152207 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0328 21:13:31.989041 1152207 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0328 21:13:31.989072 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0328 21:13:31.989147 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:13:32.001615 1152207 addons.go:234] Setting addon default-storageclass=true in "addons-564371"
	I0328 21:13:32.001669 1152207 host.go:66] Checking if "addons-564371" exists ...
	I0328 21:13:32.003836 1152207 out.go:177]   - Using image docker.io/registry:2.8.3
	I0328 21:13:32.007060 1152207 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0328 21:13:32.016260 1152207 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0328 21:13:32.016286 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0328 21:13:32.016366 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:13:32.022742 1152207 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0328 21:13:32.004048 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 21:13:32.004776 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Status}}
	I0328 21:13:32.004979 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0328 21:13:32.005081 1152207 host.go:66] Checking if "addons-564371" exists ...
	I0328 21:13:32.004032 1152207 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0328 21:13:32.025047 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:13:32.032381 1152207 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0328 21:13:32.042909 1152207 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0328 21:13:32.042917 1152207 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0328 21:13:32.043773 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0328 21:13:32.044631 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:13:32.085603 1152207 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 21:13:32.087519 1152207 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 21:13:32.087543 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 21:13:32.087698 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:13:32.083089 1152207 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0328 21:13:32.102583 1152207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0328 21:13:32.083190 1152207 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0328 21:13:32.106587 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0328 21:13:32.106680 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:13:32.106829 1152207 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0328 21:13:32.108944 1152207 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0328 21:13:32.108967 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0328 21:13:32.109033 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:13:32.145871 1152207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0328 21:13:32.107166 1152207 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0328 21:13:32.150296 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0328 21:13:32.156349 1152207 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0328 21:13:32.160402 1152207 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0328 21:13:32.160426 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0328 21:13:32.160496 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:13:32.167653 1152207 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-564371"
	I0328 21:13:32.167745 1152207 host.go:66] Checking if "addons-564371" exists ...
	I0328 21:13:32.170998 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Status}}
	I0328 21:13:32.156552 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:13:32.186729 1152207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34263 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa Username:docker}
	I0328 21:13:32.156358 1152207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0328 21:13:32.158076 1152207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34263 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa Username:docker}
	I0328 21:13:32.206775 1152207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0328 21:13:32.212588 1152207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0328 21:13:32.215768 1152207 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0328 21:13:32.220201 1152207 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0328 21:13:32.220279 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0328 21:13:32.220374 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:13:32.229099 1152207 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 21:13:32.229163 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 21:13:32.229241 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:13:32.216719 1152207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34263 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa Username:docker}
	I0328 21:13:32.254821 1152207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34263 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa Username:docker}
	I0328 21:13:32.256835 1152207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34263 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa Username:docker}
	I0328 21:13:32.281873 1152207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34263 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa Username:docker}
	I0328 21:13:32.300380 1152207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34263 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa Username:docker}
	I0328 21:13:32.309420 1152207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34263 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa Username:docker}
	I0328 21:13:32.333385 1152207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34263 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa Username:docker}
	I0328 21:13:32.347503 1152207 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0328 21:13:32.349378 1152207 out.go:177]   - Using image docker.io/busybox:stable
	I0328 21:13:32.347299 1152207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34263 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa Username:docker}
	I0328 21:13:32.351851 1152207 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0328 21:13:32.351866 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0328 21:13:32.351922 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:13:32.371456 1152207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34263 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa Username:docker}
	I0328 21:13:32.371456 1152207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34263 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa Username:docker}
	W0328 21:13:32.373755 1152207 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0328 21:13:32.373792 1152207 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0328 21:13:32.373814 1152207 retry.go:31] will retry after 315.348249ms: ssh: handshake failed: EOF
	I0328 21:13:32.373792 1152207 retry.go:31] will retry after 336.704691ms: ssh: handshake failed: EOF
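The two `sshutil` handshake warnings are benign: with many installers dialing the same forwarded port (127.0.0.1:34263) at once, an occasional SSH handshake fails with EOF, and the runner retries after a short randomized delay (315ms and 336ms here) instead of failing the step. A sketch of that retry-with-jitter pattern, with the `dial` callback supplied by the caller as a stand-in for the real SSH dial:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithJitter retries dial up to attempts times, sleeping a
	// randomized ~300ms between tries, like the retry.go lines above.
	func retryWithJitter(attempts int, dial func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = dial(); err == nil {
				return nil
			}
			delay := 250*time.Millisecond + time.Duration(rand.Intn(150))*time.Millisecond
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		calls := 0
		err := retryWithJitter(3, func() error {
			calls++
			if calls < 2 {
				return errors.New("ssh: handshake failed: EOF") // simulate the transient failure
			}
			return nil
		})
		fmt.Println("result:", err)
	}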
	I0328 21:13:32.399024 1152207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34263 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa Username:docker}
	I0328 21:13:32.440322 1152207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 21:13:32.650677 1152207 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0328 21:13:32.650719 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0328 21:13:32.737830 1152207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 21:13:32.741312 1152207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0328 21:13:32.742578 1152207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0328 21:13:32.744471 1152207 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 21:13:32.744494 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0328 21:13:32.746674 1152207 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0328 21:13:32.746703 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0328 21:13:32.789342 1152207 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0328 21:13:32.789370 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0328 21:13:32.791672 1152207 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0328 21:13:32.791705 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0328 21:13:32.823992 1152207 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0328 21:13:32.824040 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0328 21:13:32.832362 1152207 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0328 21:13:32.832387 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0328 21:13:32.842519 1152207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0328 21:13:32.851598 1152207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0328 21:13:32.864359 1152207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0328 21:13:32.894694 1152207 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 21:13:32.894729 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 21:13:32.917777 1152207 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0328 21:13:32.917812 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0328 21:13:32.951136 1152207 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0328 21:13:32.951164 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0328 21:13:33.005908 1152207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0328 21:13:33.015038 1152207 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 21:13:33.015065 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 21:13:33.037596 1152207 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0328 21:13:33.037624 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0328 21:13:33.065740 1152207 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0328 21:13:33.065778 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0328 21:13:33.151823 1152207 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0328 21:13:33.151859 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0328 21:13:33.246809 1152207 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0328 21:13:33.246836 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0328 21:13:33.258612 1152207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 21:13:33.267194 1152207 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0328 21:13:33.267222 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0328 21:13:33.270526 1152207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 21:13:33.318498 1152207 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0328 21:13:33.318523 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0328 21:13:33.333812 1152207 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0328 21:13:33.333841 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0328 21:13:33.406043 1152207 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0328 21:13:33.406078 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0328 21:13:33.419469 1152207 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0328 21:13:33.419500 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0328 21:13:33.448790 1152207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0328 21:13:33.461836 1152207 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0328 21:13:33.461861 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0328 21:13:33.538590 1152207 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0328 21:13:33.538618 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0328 21:13:33.561275 1152207 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0328 21:13:33.561301 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0328 21:13:33.617909 1152207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0328 21:13:33.730774 1152207 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0328 21:13:33.730812 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0328 21:13:33.732182 1152207 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0328 21:13:33.732206 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0328 21:13:33.915271 1152207 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0328 21:13:33.915302 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0328 21:13:33.915582 1152207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0328 21:13:34.073006 1152207 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0328 21:13:34.073043 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0328 21:13:34.200725 1152207 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.168164764s)
	I0328 21:13:34.200762 1152207 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
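The long sed pipeline that just completed rewrites the coredns ConfigMap so the Corefile gains a `hosts` block ahead of its `forward . /etc/resolv.conf` line; after the `kubectl replace`, in-cluster lookups of host.minikube.internal resolve to the Docker network gateway. Reconstructed from the sed expression in the command itself, the injected stanza is:

	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }

The `fallthrough` directive hands every other name back to the rest of the Corefile plugin chain, so only this one record is special-cased.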
	I0328 21:13:34.201860 1152207 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.76151143s)
	I0328 21:13:34.202609 1152207 node_ready.go:35] waiting up to 6m0s for node "addons-564371" to be "Ready" ...
	I0328 21:13:34.203950 1152207 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0328 21:13:34.203971 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0328 21:13:34.330810 1152207 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0328 21:13:34.330834 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0328 21:13:34.493605 1152207 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0328 21:13:34.493636 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0328 21:13:34.630661 1152207 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0328 21:13:34.630700 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0328 21:13:34.813994 1152207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0328 21:13:35.043532 1152207 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-564371" context rescaled to 1 replicas
	I0328 21:13:36.267959 1152207 node_ready.go:53] node "addons-564371" has status "Ready":"False"
	I0328 21:13:36.875130 1152207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.137264199s)
	I0328 21:13:36.875237 1152207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.133891889s)
	I0328 21:13:37.989800 1152207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.247187918s)
	I0328 21:13:37.990220 1152207 addons.go:470] Verifying addon ingress=true in "addons-564371"
	I0328 21:13:37.993253 1152207 out.go:177] * Verifying ingress addon...
	I0328 21:13:37.989910 1152207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.147316347s)
	I0328 21:13:37.990012 1152207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.125622499s)
	I0328 21:13:37.990049 1152207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.984102076s)
	I0328 21:13:37.990067 1152207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.731425397s)
	I0328 21:13:37.990115 1152207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.719559415s)
	I0328 21:13:37.990142 1152207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.541320781s)
	I0328 21:13:37.989930 1152207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.138272189s)
	I0328 21:13:37.995208 1152207 addons.go:470] Verifying addon registry=true in "addons-564371"
	I0328 21:13:37.998263 1152207 out.go:177] * Verifying registry addon...
	I0328 21:13:37.995781 1152207 addons.go:470] Verifying addon metrics-server=true in "addons-564371"
	I0328 21:13:37.996481 1152207 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0328 21:13:38.001975 1152207 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0328 21:13:38.003377 1152207 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-564371 service yakd-dashboard -n yakd-dashboard
	
	I0328 21:13:38.014482 1152207 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0328 21:13:38.014582 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:38.023570 1152207 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0328 21:13:38.023648 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0328 21:13:38.052006 1152207 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
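This storage-provisioner-rancher warning is an optimistic-concurrency conflict: the default-storageclass addon and the local-path marker update StorageClass objects at nearly the same moment, one write lands on a stale resourceVersion, and the API server rejects it with "the object has been modified". The standard remedy is to re-read and re-apply the change on conflict; a sketch using client-go's RetryOnConflict helper (the kubeconfig wiring in main is assumed for illustration, and "local-path" is taken from the warning above):

	package main

	import (
		"context"
		"os"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	// markDefault re-reads the StorageClass and retries the update on a
	// 409 conflict, instead of failing like the warning above.
	func markDefault(cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
			return err
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := markDefault(cs, "local-path"); err != nil {
			panic(err)
		}
	}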
	I0328 21:13:38.070462 1152207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.452496376s)
	W0328 21:13:38.070806 1152207 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0328 21:13:38.070858 1152207 retry.go:31] will retry after 184.406477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0328 21:13:38.070666 1152207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.155055271s)
	I0328 21:13:38.256040 1152207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
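The snapshot-controller failure above is a well-known apply-ordering race: the VolumeSnapshot CRDs and a VolumeSnapshotClass object go out in one kubectl invocation, and the custom resource is rejected because the freshly created CRD is not yet established ("ensure CRDs are installed first"). The `apply --force` retry just issued succeeds a few lines below once the CRDs have registered. One way to avoid the race entirely is to apply the CRDs first and wait for their Established condition before applying objects that use them; a sketch shelling out to kubectl, with the file paths taken from the log (this is a workaround pattern, not what minikube does here):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func kubectl(args ...string) error {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl %v: %v: %s", args, err, out)
		}
		return nil
	}

	func main() {
		// 1. Apply the CRDs on their own.
		crds := []string{
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml",
		}
		for _, f := range crds {
			if err := kubectl("apply", "-f", f); err != nil {
				panic(err)
			}
		}
		// 2. Block until the CRD is established before creating CRs.
		if err := kubectl("wait", "--for=condition=established", "--timeout=60s",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
			panic(err)
		}
		// 3. Now the VolumeSnapshotClass object can be applied safely.
		if err := kubectl("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
			panic(err)
		}
	}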
	I0328 21:13:38.484177 1152207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.670132738s)
	I0328 21:13:38.484214 1152207 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-564371"
	I0328 21:13:38.486076 1152207 out.go:177] * Verifying csi-hostpath-driver addon...
	I0328 21:13:38.489005 1152207 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0328 21:13:38.514103 1152207 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0328 21:13:38.514129 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:38.531053 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:38.532331 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:38.717202 1152207 node_ready.go:53] node "addons-564371" has status "Ready":"False"
	I0328 21:13:38.996895 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:39.045578 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:39.046326 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:39.055585 1152207 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0328 21:13:39.055674 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:13:39.081687 1152207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34263 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa Username:docker}
	I0328 21:13:39.268337 1152207 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0328 21:13:39.303504 1152207 addons.go:234] Setting addon gcp-auth=true in "addons-564371"
	I0328 21:13:39.303555 1152207 host.go:66] Checking if "addons-564371" exists ...
	I0328 21:13:39.304007 1152207 cli_runner.go:164] Run: docker container inspect addons-564371 --format={{.State.Status}}
	I0328 21:13:39.324173 1152207 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0328 21:13:39.324227 1152207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-564371
	I0328 21:13:39.352272 1152207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34263 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/addons-564371/id_rsa Username:docker}
	I0328 21:13:39.511497 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:39.533584 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:39.536407 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:39.837413 1152207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.581329902s)
	I0328 21:13:39.840552 1152207 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0328 21:13:39.842427 1152207 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0328 21:13:39.844253 1152207 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0328 21:13:39.844304 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0328 21:13:39.902406 1152207 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0328 21:13:39.902481 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0328 21:13:39.971176 1152207 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0328 21:13:39.971241 1152207 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0328 21:13:39.997011 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:40.019211 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:40.020060 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:40.024441 1152207 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0328 21:13:40.499487 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:40.528652 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:40.530077 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:41.085702 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:41.096233 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:41.096800 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:41.114879 1152207 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.090334974s)
	I0328 21:13:41.117478 1152207 addons.go:470] Verifying addon gcp-auth=true in "addons-564371"
	I0328 21:13:41.120443 1152207 out.go:177] * Verifying gcp-auth addon...
	I0328 21:13:41.122962 1152207 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0328 21:13:41.127399 1152207 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0328 21:13:41.127428 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:41.207037 1152207 node_ready.go:53] node "addons-564371" has status "Ready":"False"
	I0328 21:13:41.493947 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:41.510630 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:41.514780 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:41.627038 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:41.994355 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:42.011322 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:42.012218 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:42.128385 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:42.495942 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:42.510986 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:42.511792 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:42.627743 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:42.993688 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:43.013520 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:43.014280 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:43.130365 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:43.209529 1152207 node_ready.go:53] node "addons-564371" has status "Ready":"False"
	I0328 21:13:43.495345 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:43.508470 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:43.509714 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:43.627177 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:43.994027 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:44.010325 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:44.010500 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:44.126869 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:44.494720 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:44.508569 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:44.509714 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:44.629305 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:44.995534 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:45.010986 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:45.019053 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:45.127432 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:45.493351 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:45.508494 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:45.509332 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:45.626835 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:45.706240 1152207 node_ready.go:53] node "addons-564371" has status "Ready":"False"
	I0328 21:13:45.994415 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:46.010080 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:46.010869 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:46.127544 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:46.493195 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:46.508356 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:46.508943 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:46.626952 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:46.995551 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:47.008405 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:47.009810 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:47.127336 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:47.493591 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:47.508403 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:47.509548 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:47.626652 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:47.993494 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:48.014827 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:48.016772 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:48.127437 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:48.206137 1152207 node_ready.go:53] node "addons-564371" has status "Ready":"False"
	I0328 21:13:48.494328 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:48.507731 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:48.508558 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:48.627003 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:48.994041 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:49.010072 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:49.011009 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:49.128008 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:49.493408 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:49.510697 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:49.513197 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:49.626901 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:49.994157 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:50.017711 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:50.018155 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:50.127692 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:50.206753 1152207 node_ready.go:53] node "addons-564371" has status "Ready":"False"
	I0328 21:13:50.493341 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:50.507529 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:50.508638 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:50.626551 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:50.993920 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:51.010685 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:51.011827 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:51.126996 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:51.493829 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:51.509506 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:51.511023 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:51.626553 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:51.993190 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:52.010268 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:52.010990 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:52.127252 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:52.494315 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:52.507464 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:52.508367 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:52.626731 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:52.706695 1152207 node_ready.go:53] node "addons-564371" has status "Ready":"False"
	I0328 21:13:52.993379 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:53.009787 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:53.010459 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:53.127457 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:53.494028 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:53.508534 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:53.510481 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:53.627184 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:53.993753 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:54.009082 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:54.009938 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:54.126611 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:54.494295 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:54.508285 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:54.508609 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:54.626953 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:54.994008 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:55.015931 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:55.016427 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:55.127026 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:55.206452 1152207 node_ready.go:53] node "addons-564371" has status "Ready":"False"
	I0328 21:13:55.493551 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:55.507629 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:55.508458 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:55.627226 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:55.993634 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:56.009365 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:56.010084 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:56.126966 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:56.493581 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:56.508081 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:56.508480 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:56.626588 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:56.993218 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:57.009049 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:57.010073 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:57.127132 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:57.206725 1152207 node_ready.go:53] node "addons-564371" has status "Ready":"False"
	I0328 21:13:57.493523 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:57.508493 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:57.508894 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:57.627191 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:57.993097 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:58.010526 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:58.011569 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:58.127425 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:58.493752 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:58.508632 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:58.511000 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:58.627184 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:58.993648 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:59.008117 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:59.008455 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:59.126659 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:59.493526 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:13:59.509409 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:13:59.509656 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:13:59.626875 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:13:59.706248 1152207 node_ready.go:53] node "addons-564371" has status "Ready":"False"
	I0328 21:13:59.993642 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:00.032873 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:00.045665 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:00.213427 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:00.494818 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:00.509602 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:00.510327 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:00.627982 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:01.001705 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:01.014474 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:01.015602 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:01.127490 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:01.494257 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:01.508782 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:01.509799 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:01.626909 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:01.706369 1152207 node_ready.go:53] node "addons-564371" has status "Ready":"False"
	I0328 21:14:01.993646 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:02.025386 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:02.028073 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:02.126738 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:02.493858 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:02.509066 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:02.510153 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:02.626780 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:02.993702 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:03.035606 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:03.039922 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:03.126993 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:03.493560 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:03.509155 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:03.510361 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:03.626865 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:03.706767 1152207 node_ready.go:53] node "addons-564371" has status "Ready":"False"
	I0328 21:14:03.993252 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:04.011042 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:04.012804 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:04.127164 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:04.530943 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:04.532803 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:04.541204 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:04.634589 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:04.712787 1152207 node_ready.go:49] node "addons-564371" has status "Ready":"True"
	I0328 21:14:04.712850 1152207 node_ready.go:38] duration metric: took 30.510214464s for node "addons-564371" to be "Ready" ...
	I0328 21:14:04.712897 1152207 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 21:14:04.731659 1152207 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-dqf85" in "kube-system" namespace to be "Ready" ...
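(The node_ready.go and pod_ready.go lines above follow a plain poll-and-check loop: fetch the object, inspect its Ready condition, and report the elapsed time as a "duration metric" once it flips to True. Below is a minimal client-go sketch of that pattern, assuming a standard kubeconfig; the function name and the pod name in main are illustrative examples taken from this log, not minikube's actual pod_ready.go.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady polls a named pod until its Ready condition is True,
	// then prints how long the wait took, mirroring the "duration metric"
	// lines in the log above.
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		start := time.Now()
		err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // tolerate transient API errors; keep polling until timeout
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil // no Ready condition reported yet (e.g. still Pending)
			})
		if err == nil {
			fmt.Printf("duration metric: took %s for pod %q to be \"Ready\"\n", time.Since(start), name)
		}
		return err
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Pod name below is only an example lifted from the log above.
		if err := waitForPodReady(context.Background(), cs, "kube-system", "coredns-76f75df574-dqf85", 6*time.Minute); err != nil {
			panic(err)
		}
	}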
	I0328 21:14:05.000353 1152207 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0328 21:14:05.000380 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:05.015543 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:05.018806 1152207 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0328 21:14:05.018834 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:05.128499 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:05.501846 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:05.511575 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:05.513738 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:05.629005 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:06.004023 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:06.032546 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:06.033904 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:06.148869 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:06.496844 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:06.510806 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:06.517877 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:06.629551 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:06.738923 1152207 pod_ready.go:102] pod "coredns-76f75df574-dqf85" in "kube-system" namespace has status "Ready":"False"
	I0328 21:14:06.995227 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:07.014609 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:07.021055 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:07.126770 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:07.252160 1152207 pod_ready.go:92] pod "coredns-76f75df574-dqf85" in "kube-system" namespace has status "Ready":"True"
	I0328 21:14:07.252186 1152207 pod_ready.go:81] duration metric: took 2.520492752s for pod "coredns-76f75df574-dqf85" in "kube-system" namespace to be "Ready" ...
	I0328 21:14:07.252209 1152207 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-564371" in "kube-system" namespace to be "Ready" ...
	I0328 21:14:07.261062 1152207 pod_ready.go:92] pod "etcd-addons-564371" in "kube-system" namespace has status "Ready":"True"
	I0328 21:14:07.261098 1152207 pod_ready.go:81] duration metric: took 8.872246ms for pod "etcd-addons-564371" in "kube-system" namespace to be "Ready" ...
	I0328 21:14:07.261113 1152207 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-564371" in "kube-system" namespace to be "Ready" ...
	I0328 21:14:07.285958 1152207 pod_ready.go:92] pod "kube-apiserver-addons-564371" in "kube-system" namespace has status "Ready":"True"
	I0328 21:14:07.285985 1152207 pod_ready.go:81] duration metric: took 24.851784ms for pod "kube-apiserver-addons-564371" in "kube-system" namespace to be "Ready" ...
	I0328 21:14:07.285998 1152207 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-564371" in "kube-system" namespace to be "Ready" ...
	I0328 21:14:07.294400 1152207 pod_ready.go:92] pod "kube-controller-manager-addons-564371" in "kube-system" namespace has status "Ready":"True"
	I0328 21:14:07.294428 1152207 pod_ready.go:81] duration metric: took 8.42241ms for pod "kube-controller-manager-addons-564371" in "kube-system" namespace to be "Ready" ...
	I0328 21:14:07.294442 1152207 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tgnbc" in "kube-system" namespace to be "Ready" ...
	I0328 21:14:07.307328 1152207 pod_ready.go:92] pod "kube-proxy-tgnbc" in "kube-system" namespace has status "Ready":"True"
	I0328 21:14:07.307364 1152207 pod_ready.go:81] duration metric: took 12.910939ms for pod "kube-proxy-tgnbc" in "kube-system" namespace to be "Ready" ...
	I0328 21:14:07.307383 1152207 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-564371" in "kube-system" namespace to be "Ready" ...
	I0328 21:14:07.495779 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:07.513634 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:07.514873 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:07.627482 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:07.646621 1152207 pod_ready.go:92] pod "kube-scheduler-addons-564371" in "kube-system" namespace has status "Ready":"True"
	I0328 21:14:07.646652 1152207 pod_ready.go:81] duration metric: took 339.26032ms for pod "kube-scheduler-addons-564371" in "kube-system" namespace to be "Ready" ...
	I0328 21:14:07.646665 1152207 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-vf465" in "kube-system" namespace to be "Ready" ...
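(The four kapi.go:96 selectors — csi-hostpath-driver, registry, gcp-auth, ingress-nginx — report at distinct sub-second offsets throughout this log, which is consistent with each addon being polled on its own goroutine. Below is a sketch of that structure under that assumption; the selector list, all-namespaces listing, and helper names are illustrative, not minikube's actual kapi.go.)

	package main

	import (
		"context"
		"fmt"
		"time"

		"golang.org/x/sync/errgroup"
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForSelector polls until at least one pod matches the label selector
	// and every match is Running, printing a "current state" line on each
	// unsuccessful attempt, like the kapi.go:96 lines above.
	func waitForSelector(ctx context.Context, cs kubernetes.Interface, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
					return false, nil
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						return false, nil
					}
				}
				return true, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		selectors := []string{
			"kubernetes.io/minikube-addons=csi-hostpath-driver",
			"kubernetes.io/minikube-addons=registry",
			"kubernetes.io/minikube-addons=gcp-auth",
			"app.kubernetes.io/name=ingress-nginx",
		}
		g, ctx := errgroup.WithContext(context.Background())
		for _, sel := range selectors {
			sel := sel // capture loop variable for the goroutine (pre-Go 1.22)
			g.Go(func() error { return waitForSelector(ctx, cs, sel, 6*time.Minute) })
		}
		if err := g.Wait(); err != nil {
			panic(err)
		}
	}

(In this sketch g.Wait() surfaces the first error and its context cancellation stops the remaining pollers; whether minikube fails fast the same way is not shown by this log.)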
	I0328 21:14:07.996351 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:08.022791 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:08.023621 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:08.129431 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:08.497068 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:08.511768 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:08.517817 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:08.627805 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:08.994977 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:09.010823 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:09.011840 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:09.126709 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:09.495898 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:09.508409 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:09.510003 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:09.626921 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:09.653268 1152207 pod_ready.go:102] pod "metrics-server-69cf46c98-vf465" in "kube-system" namespace has status "Ready":"False"
	I0328 21:14:09.994291 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:10.012010 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:10.014808 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:10.127442 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:10.494332 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:10.508994 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:10.510169 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:10.627072 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:10.995688 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:11.011126 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:11.015910 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:11.128106 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:11.494569 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:11.509343 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:11.514597 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:11.627275 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:11.662520 1152207 pod_ready.go:102] pod "metrics-server-69cf46c98-vf465" in "kube-system" namespace has status "Ready":"False"
	I0328 21:14:11.999571 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:12.014904 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:12.016843 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:12.127694 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:12.497609 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:12.511541 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:12.514200 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:12.627749 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:12.996391 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:13.014341 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:13.017185 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:13.127716 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:13.499809 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:13.515382 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:13.518469 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:13.627823 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:13.994913 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:14.013054 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:14.014645 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:14.127504 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:14.153346 1152207 pod_ready.go:102] pod "metrics-server-69cf46c98-vf465" in "kube-system" namespace has status "Ready":"False"
	I0328 21:14:14.495214 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:14.508515 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:14.510806 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:14.626756 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:14.995687 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:15.011120 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:15.018475 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:15.127650 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:15.496281 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:15.508400 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:15.511967 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:15.626994 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:15.995610 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:16.017254 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:16.023972 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:16.127554 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:16.157936 1152207 pod_ready.go:102] pod "metrics-server-69cf46c98-vf465" in "kube-system" namespace has status "Ready":"False"
	I0328 21:14:16.498690 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:16.514875 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:16.516423 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:16.627182 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:16.994789 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:17.008837 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:17.011500 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:17.126737 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:17.498718 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:17.533651 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:17.541546 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:17.628036 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:17.995174 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:18.010812 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:18.012376 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:18.127198 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:18.175634 1152207 pod_ready.go:102] pod "metrics-server-69cf46c98-vf465" in "kube-system" namespace has status "Ready":"False"
	I0328 21:14:18.496603 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:18.512377 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:18.515205 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:18.627772 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:18.994339 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:19.014475 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:19.016586 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:19.127171 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:19.494419 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:19.507919 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:19.510329 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:19.627051 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:19.994998 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:20.020240 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:20.021027 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:20.126651 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:20.503695 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:20.512431 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:20.521272 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:20.626953 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:20.657040 1152207 pod_ready.go:102] pod "metrics-server-69cf46c98-vf465" in "kube-system" namespace has status "Ready":"False"
	I0328 21:14:20.995448 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:21.011453 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:21.017433 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:21.128719 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:21.495751 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:21.518297 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:21.529937 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:21.627109 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:21.996610 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:22.013531 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:22.014940 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:22.130791 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:22.520246 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:22.561831 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:22.567681 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:22.628396 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:22.667303 1152207 pod_ready.go:102] pod "metrics-server-69cf46c98-vf465" in "kube-system" namespace has status "Ready":"False"
	I0328 21:14:23.008952 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:23.021589 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:23.022561 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:23.130586 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:23.171002 1152207 pod_ready.go:92] pod "metrics-server-69cf46c98-vf465" in "kube-system" namespace has status "Ready":"True"
	I0328 21:14:23.171078 1152207 pod_ready.go:81] duration metric: took 15.524404383s for pod "metrics-server-69cf46c98-vf465" in "kube-system" namespace to be "Ready" ...
	I0328 21:14:23.171105 1152207 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-98h7b" in "kube-system" namespace to be "Ready" ...
	I0328 21:14:23.495165 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:23.515327 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:23.516767 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:23.631565 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:23.994868 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:24.011893 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:24.015132 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:24.128273 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:24.494590 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:24.508122 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:24.509447 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:24.626973 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:24.995279 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:25.010840 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:25.014533 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:25.127849 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:25.179635 1152207 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-98h7b" in "kube-system" namespace has status "Ready":"False"
	I0328 21:14:25.495541 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:25.510291 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:25.515420 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:25.627057 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:25.994747 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:26.011416 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:26.013525 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:26.133912 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:26.496505 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:26.511831 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:26.513745 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:26.626842 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:26.995104 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:27.012963 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:27.013610 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:27.127450 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:27.498976 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:27.522472 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:27.524516 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:27.626621 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:27.685264 1152207 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-98h7b" in "kube-system" namespace has status "Ready":"False"
	I0328 21:14:27.994913 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:28.012304 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:28.013301 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:28.134458 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:28.495326 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:28.508616 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:28.510697 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:28.627604 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:28.995162 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:29.017612 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:29.017841 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:29.129323 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:29.495728 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:29.523140 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:29.524272 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:29.626956 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:29.995914 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:30.022261 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:30.023680 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:30.128268 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:30.187920 1152207 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-98h7b" in "kube-system" namespace has status "Ready":"False"
	I0328 21:14:30.495885 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:30.508602 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:30.510120 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:30.627203 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:30.997931 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:31.011972 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:31.012616 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:31.126919 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:31.495625 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:31.510728 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:31.512387 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:31.627045 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:31.995584 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:32.012826 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:32.015979 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:32.126914 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:32.503069 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:32.515398 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:32.519447 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:32.629669 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:32.677819 1152207 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-98h7b" in "kube-system" namespace has status "Ready":"False"
	I0328 21:14:32.995088 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:33.015576 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:33.016623 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:33.127583 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:33.495337 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:33.508662 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:33.509460 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:33.627634 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:33.995756 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:34.014916 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:34.015248 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:34.126738 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:34.494592 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:34.508361 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:34.509579 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:34.628333 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:34.682364 1152207 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-98h7b" in "kube-system" namespace has status "Ready":"False"
	I0328 21:14:34.995176 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:35.015056 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:35.027315 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:35.129229 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:35.496745 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:35.512217 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:35.513556 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:35.627286 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:35.995728 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:36.015783 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:36.020121 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:36.130056 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:36.506439 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:36.510584 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:36.513114 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:36.628166 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:36.997029 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:37.010393 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:37.012393 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:37.127548 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:37.178971 1152207 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-98h7b" in "kube-system" namespace has status "Ready":"False"
	I0328 21:14:37.495734 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:37.511718 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:37.514238 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:37.627063 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:37.995298 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:38.011940 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:38.012428 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:38.126534 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:38.497125 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:38.517652 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:38.517857 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:38.627223 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:38.995056 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:39.014390 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:39.015289 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:39.127768 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:39.179359 1152207 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-98h7b" in "kube-system" namespace has status "Ready":"False"
	I0328 21:14:39.496117 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:39.512211 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:39.513192 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:39.627208 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:39.996671 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:40.011620 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:40.014887 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:40.127198 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:40.497245 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:40.512516 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:40.513870 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:40.629000 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:40.998775 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:41.015583 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:41.024603 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:41.127701 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:41.495860 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:41.510586 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:41.511519 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:41.627253 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:41.678133 1152207 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-98h7b" in "kube-system" namespace has status "Ready":"False"
	I0328 21:14:41.995461 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:42.018978 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:42.021996 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:42.127520 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:42.495278 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:42.510651 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:42.511213 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:42.626755 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:42.994827 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:43.014127 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:43.015816 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:43.127390 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:43.494194 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:43.509799 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:43.510710 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:43.627080 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:43.677825 1152207 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-98h7b" in "kube-system" namespace has status "Ready":"True"
	I0328 21:14:43.677851 1152207 pod_ready.go:81] duration metric: took 20.506725183s for pod "nvidia-device-plugin-daemonset-98h7b" in "kube-system" namespace to be "Ready" ...
	I0328 21:14:43.677896 1152207 pod_ready.go:38] duration metric: took 38.964972518s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 21:14:43.677918 1152207 api_server.go:52] waiting for apiserver process to appear ...
	I0328 21:14:43.677949 1152207 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 21:14:43.678023 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 21:14:43.729533 1152207 cri.go:89] found id: "12e77a21a9fd638cda3bb435c8b196a902af41d8c834660357aeb40498c75b4d"
	I0328 21:14:43.729629 1152207 cri.go:89] found id: ""
	I0328 21:14:43.729656 1152207 logs.go:276] 1 containers: [12e77a21a9fd638cda3bb435c8b196a902af41d8c834660357aeb40498c75b4d]
	I0328 21:14:43.729814 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:14:43.738496 1152207 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 21:14:43.738628 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 21:14:43.787886 1152207 cri.go:89] found id: "bb0c936489a1197905676ac1023656be44709c14bf023b6ceee30e7803dcbcd4"
	I0328 21:14:43.787905 1152207 cri.go:89] found id: ""
	I0328 21:14:43.787913 1152207 logs.go:276] 1 containers: [bb0c936489a1197905676ac1023656be44709c14bf023b6ceee30e7803dcbcd4]
	I0328 21:14:43.787968 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:14:43.791591 1152207 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 21:14:43.791670 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 21:14:43.841980 1152207 cri.go:89] found id: "64c606d5eccd69d5abc91ef3d13e812505a9415493ec3d306cae3133bc8e4c82"
	I0328 21:14:43.842007 1152207 cri.go:89] found id: ""
	I0328 21:14:43.842020 1152207 logs.go:276] 1 containers: [64c606d5eccd69d5abc91ef3d13e812505a9415493ec3d306cae3133bc8e4c82]
	I0328 21:14:43.842108 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:14:43.846282 1152207 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 21:14:43.846371 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 21:14:43.891172 1152207 cri.go:89] found id: "79ece69608e3978c937732d6c80e8a983e055f882a4ba16d182b60dd3c84283b"
	I0328 21:14:43.891197 1152207 cri.go:89] found id: ""
	I0328 21:14:43.891204 1152207 logs.go:276] 1 containers: [79ece69608e3978c937732d6c80e8a983e055f882a4ba16d182b60dd3c84283b]
	I0328 21:14:43.891288 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:14:43.894888 1152207 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 21:14:43.895008 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 21:14:43.947778 1152207 cri.go:89] found id: "60c61cb9647ceb3f2afc221cc1b8c5e9efd0c6ff69375cd3d9cc433c844004df"
	I0328 21:14:43.947800 1152207 cri.go:89] found id: ""
	I0328 21:14:43.947808 1152207 logs.go:276] 1 containers: [60c61cb9647ceb3f2afc221cc1b8c5e9efd0c6ff69375cd3d9cc433c844004df]
	I0328 21:14:43.947914 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:14:43.951521 1152207 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 21:14:43.951634 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 21:14:43.995491 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:44.011949 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:44.013868 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:44.040915 1152207 cri.go:89] found id: "855ea02ed22c52ed3dc18e33c7264cb29f781ae69177261c9fc92758cb0e1e58"
	I0328 21:14:44.040974 1152207 cri.go:89] found id: ""
	I0328 21:14:44.040995 1152207 logs.go:276] 1 containers: [855ea02ed22c52ed3dc18e33c7264cb29f781ae69177261c9fc92758cb0e1e58]
	I0328 21:14:44.041068 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:14:44.044732 1152207 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 21:14:44.044820 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 21:14:44.127548 1152207 cri.go:89] found id: "180a8fae013cf08e447b5167c99e1a2ccffbbb79313a97d1dca70b91092f6595"
	I0328 21:14:44.127618 1152207 cri.go:89] found id: ""
	I0328 21:14:44.127654 1152207 logs.go:276] 1 containers: [180a8fae013cf08e447b5167c99e1a2ccffbbb79313a97d1dca70b91092f6595]
	I0328 21:14:44.127741 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:14:44.128461 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:44.140822 1152207 logs.go:123] Gathering logs for kube-proxy [60c61cb9647ceb3f2afc221cc1b8c5e9efd0c6ff69375cd3d9cc433c844004df] ...
	I0328 21:14:44.140849 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60c61cb9647ceb3f2afc221cc1b8c5e9efd0c6ff69375cd3d9cc433c844004df"
	I0328 21:14:44.338781 1152207 logs.go:123] Gathering logs for kube-controller-manager [855ea02ed22c52ed3dc18e33c7264cb29f781ae69177261c9fc92758cb0e1e58] ...
	I0328 21:14:44.338808 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855ea02ed22c52ed3dc18e33c7264cb29f781ae69177261c9fc92758cb0e1e58"
	I0328 21:14:44.495487 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:44.509806 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:44.511247 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:44.569842 1152207 logs.go:123] Gathering logs for kindnet [180a8fae013cf08e447b5167c99e1a2ccffbbb79313a97d1dca70b91092f6595] ...
	I0328 21:14:44.569881 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 180a8fae013cf08e447b5167c99e1a2ccffbbb79313a97d1dca70b91092f6595"
	I0328 21:14:44.627122 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:44.705840 1152207 logs.go:123] Gathering logs for container status ...
	I0328 21:14:44.705912 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 21:14:44.996317 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:45.013500 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:45.020072 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:45.129272 1152207 logs.go:123] Gathering logs for kubelet ...
	I0328 21:14:45.129358 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 21:14:45.137852 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0328 21:14:45.195758 1152207 logs.go:138] Found kubelet problem: Mar 28 21:13:31 addons-564371 kubelet[1504]: W0328 21:13:31.299862    1504 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-564371" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-564371' and this object
	W0328 21:14:45.200404 1152207 logs.go:138] Found kubelet problem: Mar 28 21:13:31 addons-564371 kubelet[1504]: E0328 21:13:31.299911    1504 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-564371" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-564371' and this object
	W0328 21:14:45.237187 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: W0328 21:14:04.509762    1504 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-564371" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-564371' and this object
	W0328 21:14:45.237469 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.509795    1504 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-564371" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-564371' and this object
	W0328 21:14:45.238561 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: W0328 21:14:04.525992    1504 reflector.go:539] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-564371" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	W0328 21:14:45.238799 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.526026    1504 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-564371" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	W0328 21:14:45.238994 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: W0328 21:14:04.526396    1504 reflector.go:539] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-564371' and this object
	W0328 21:14:45.239210 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.526422    1504 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-564371' and this object
	W0328 21:14:45.239573 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: W0328 21:14:04.526689    1504 reflector.go:539] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	W0328 21:14:45.239998 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.526712    1504 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	I0328 21:14:45.280437 1152207 logs.go:123] Gathering logs for coredns [64c606d5eccd69d5abc91ef3d13e812505a9415493ec3d306cae3133bc8e4c82] ...
	I0328 21:14:45.280768 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64c606d5eccd69d5abc91ef3d13e812505a9415493ec3d306cae3133bc8e4c82"
	I0328 21:14:45.488848 1152207 logs.go:123] Gathering logs for kube-apiserver [12e77a21a9fd638cda3bb435c8b196a902af41d8c834660357aeb40498c75b4d] ...
	I0328 21:14:45.488875 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e77a21a9fd638cda3bb435c8b196a902af41d8c834660357aeb40498c75b4d"
	I0328 21:14:45.497426 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:45.517785 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:45.519017 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:45.631553 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:45.641202 1152207 logs.go:123] Gathering logs for etcd [bb0c936489a1197905676ac1023656be44709c14bf023b6ceee30e7803dcbcd4] ...
	I0328 21:14:45.641233 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb0c936489a1197905676ac1023656be44709c14bf023b6ceee30e7803dcbcd4"
	I0328 21:14:45.733716 1152207 logs.go:123] Gathering logs for kube-scheduler [79ece69608e3978c937732d6c80e8a983e055f882a4ba16d182b60dd3c84283b] ...
	I0328 21:14:45.733796 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79ece69608e3978c937732d6c80e8a983e055f882a4ba16d182b60dd3c84283b"
	I0328 21:14:45.864629 1152207 logs.go:123] Gathering logs for CRI-O ...
	I0328 21:14:45.864668 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 21:14:45.978555 1152207 logs.go:123] Gathering logs for dmesg ...
	I0328 21:14:45.978594 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 21:14:45.994668 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:46.008240 1152207 logs.go:123] Gathering logs for describe nodes ...
	I0328 21:14:46.008277 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 21:14:46.010702 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:46.015603 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:46.128662 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:46.343713 1152207 out.go:304] Setting ErrFile to fd 2...
	I0328 21:14:46.343741 1152207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 21:14:46.343795 1152207 out.go:239] X Problems detected in kubelet:
	W0328 21:14:46.343806 1152207 out.go:239]   Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.526026    1504 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-564371" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	W0328 21:14:46.343815 1152207 out.go:239]   Mar 28 21:14:04 addons-564371 kubelet[1504]: W0328 21:14:04.526396    1504 reflector.go:539] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-564371' and this object
	W0328 21:14:46.343829 1152207 out.go:239]   Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.526422    1504 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-564371' and this object
	W0328 21:14:46.343835 1152207 out.go:239]   Mar 28 21:14:04 addons-564371 kubelet[1504]: W0328 21:14:04.526689    1504 reflector.go:539] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	W0328 21:14:46.343846 1152207 out.go:239]   Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.526712    1504 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	I0328 21:14:46.343852 1152207 out.go:304] Setting ErrFile to fd 2...
	I0328 21:14:46.343858 1152207 out.go:338] TERM=,COLORTERM=, which probably does not support color
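The reflector failures flagged above come from the node authorizer: during startup the kubelet lists ConfigMaps and Secrets before the API server has linked those objects to pods bound to node addons-564371, so the list calls are briefly forbidden and typically clear once the pods register. A rough manual probe, assuming kubectl access to the same cluster and that SubjectAccessReview with impersonation consults the node authorizer:

	# hypothetical check, not part of the test run
	kubectl auth can-i list configmaps --namespace=kube-system \
	  --as=system:node:addons-564371 --as-group=system:nodes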
	I0328 21:14:46.495332 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:46.515919 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:46.517748 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:46.627934 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:46.994830 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:47.012357 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:47.013971 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:47.134885 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:47.499932 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:47.516683 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:47.518615 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:47.627949 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:47.997086 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:48.013674 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:48.017223 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:48.128639 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:48.495989 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:48.509073 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0328 21:14:48.514140 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:48.626625 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:48.994956 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:49.010169 1152207 kapi.go:107] duration metric: took 1m11.008193033s to wait for kubernetes.io/minikube-addons=registry ...
	I0328 21:14:49.011041 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:49.127093 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:49.495539 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:49.508717 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:49.627913 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:50.005273 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:50.013738 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:50.127353 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:50.500067 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:50.508229 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:50.627433 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:50.995600 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:51.011140 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:51.127110 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:51.494813 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:51.507999 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:51.626926 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:51.995865 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:52.010820 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:52.128008 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:52.496125 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:52.508710 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:52.629460 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:52.995336 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:53.014773 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:53.129409 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0328 21:14:53.496924 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:53.510368 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:53.627013 1152207 kapi.go:107] duration metric: took 1m12.504047603s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0328 21:14:53.629093 1152207 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-564371 cluster.
	I0328 21:14:53.631181 1152207 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0328 21:14:53.633087 1152207 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
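Note that the `gcp-auth-skip-secret` label has to be on the pod at creation time, since the addon injects credentials via a mutating admission webhook. A minimal sketch of opting a pod out (pod name and image are illustrative):

	# hypothetical example pod, shown only to illustrate the label
	kubectl --context addons-564371 run skip-demo --image=nginx \
	  --labels=gcp-auth-skip-secret=true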
	I0328 21:14:53.996315 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:54.015733 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:54.497131 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:54.510635 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:54.997742 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:55.011399 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:55.519237 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:55.520405 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:55.994663 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:56.029968 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:56.345577 1152207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 21:14:56.386919 1152207 api_server.go:72] duration metric: took 1m24.560517654s to wait for apiserver process to appear ...
	I0328 21:14:56.386993 1152207 api_server.go:88] waiting for apiserver healthz status ...
	I0328 21:14:56.387040 1152207 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 21:14:56.387130 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 21:14:56.454442 1152207 cri.go:89] found id: "12e77a21a9fd638cda3bb435c8b196a902af41d8c834660357aeb40498c75b4d"
	I0328 21:14:56.454517 1152207 cri.go:89] found id: ""
	I0328 21:14:56.454540 1152207 logs.go:276] 1 containers: [12e77a21a9fd638cda3bb435c8b196a902af41d8c834660357aeb40498c75b4d]
	I0328 21:14:56.454630 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:14:56.474886 1152207 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 21:14:56.475006 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 21:14:56.499617 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:56.518469 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:56.579427 1152207 cri.go:89] found id: "bb0c936489a1197905676ac1023656be44709c14bf023b6ceee30e7803dcbcd4"
	I0328 21:14:56.579451 1152207 cri.go:89] found id: ""
	I0328 21:14:56.579467 1152207 logs.go:276] 1 containers: [bb0c936489a1197905676ac1023656be44709c14bf023b6ceee30e7803dcbcd4]
	I0328 21:14:56.579522 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:14:56.583587 1152207 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 21:14:56.583670 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 21:14:56.631782 1152207 cri.go:89] found id: "64c606d5eccd69d5abc91ef3d13e812505a9415493ec3d306cae3133bc8e4c82"
	I0328 21:14:56.631806 1152207 cri.go:89] found id: ""
	I0328 21:14:56.631814 1152207 logs.go:276] 1 containers: [64c606d5eccd69d5abc91ef3d13e812505a9415493ec3d306cae3133bc8e4c82]
	I0328 21:14:56.631892 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:14:56.638366 1152207 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 21:14:56.638437 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 21:14:56.687434 1152207 cri.go:89] found id: "79ece69608e3978c937732d6c80e8a983e055f882a4ba16d182b60dd3c84283b"
	I0328 21:14:56.687456 1152207 cri.go:89] found id: ""
	I0328 21:14:56.687464 1152207 logs.go:276] 1 containers: [79ece69608e3978c937732d6c80e8a983e055f882a4ba16d182b60dd3c84283b]
	I0328 21:14:56.687528 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:14:56.691786 1152207 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 21:14:56.691883 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 21:14:56.745273 1152207 cri.go:89] found id: "60c61cb9647ceb3f2afc221cc1b8c5e9efd0c6ff69375cd3d9cc433c844004df"
	I0328 21:14:56.745331 1152207 cri.go:89] found id: ""
	I0328 21:14:56.745352 1152207 logs.go:276] 1 containers: [60c61cb9647ceb3f2afc221cc1b8c5e9efd0c6ff69375cd3d9cc433c844004df]
	I0328 21:14:56.745419 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:14:56.750506 1152207 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 21:14:56.750613 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 21:14:56.817028 1152207 cri.go:89] found id: "855ea02ed22c52ed3dc18e33c7264cb29f781ae69177261c9fc92758cb0e1e58"
	I0328 21:14:56.817089 1152207 cri.go:89] found id: ""
	I0328 21:14:56.817120 1152207 logs.go:276] 1 containers: [855ea02ed22c52ed3dc18e33c7264cb29f781ae69177261c9fc92758cb0e1e58]
	I0328 21:14:56.817195 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:14:56.821494 1152207 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 21:14:56.821604 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 21:14:56.876833 1152207 cri.go:89] found id: "180a8fae013cf08e447b5167c99e1a2ccffbbb79313a97d1dca70b91092f6595"
	I0328 21:14:56.876903 1152207 cri.go:89] found id: ""
	I0328 21:14:56.876926 1152207 logs.go:276] 1 containers: [180a8fae013cf08e447b5167c99e1a2ccffbbb79313a97d1dca70b91092f6595]
	I0328 21:14:56.876998 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:14:56.881160 1152207 logs.go:123] Gathering logs for kube-proxy [60c61cb9647ceb3f2afc221cc1b8c5e9efd0c6ff69375cd3d9cc433c844004df] ...
	I0328 21:14:56.881228 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60c61cb9647ceb3f2afc221cc1b8c5e9efd0c6ff69375cd3d9cc433c844004df"
	I0328 21:14:56.937853 1152207 logs.go:123] Gathering logs for kube-controller-manager [855ea02ed22c52ed3dc18e33c7264cb29f781ae69177261c9fc92758cb0e1e58] ...
	I0328 21:14:56.937929 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855ea02ed22c52ed3dc18e33c7264cb29f781ae69177261c9fc92758cb0e1e58"
	I0328 21:14:56.996076 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:57.046954 1152207 logs.go:123] Gathering logs for kindnet [180a8fae013cf08e447b5167c99e1a2ccffbbb79313a97d1dca70b91092f6595] ...
	I0328 21:14:57.047035 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 180a8fae013cf08e447b5167c99e1a2ccffbbb79313a97d1dca70b91092f6595"
	I0328 21:14:57.050889 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:57.145929 1152207 logs.go:123] Gathering logs for CRI-O ...
	I0328 21:14:57.145996 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 21:14:57.254480 1152207 logs.go:123] Gathering logs for kubelet ...
	I0328 21:14:57.254554 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0328 21:14:57.311921 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: W0328 21:14:04.509762    1504 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-564371" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-564371' and this object
	W0328 21:14:57.312225 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.509795    1504 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-564371" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-564371' and this object
	W0328 21:14:57.313320 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: W0328 21:14:04.525992    1504 reflector.go:539] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-564371" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	W0328 21:14:57.313553 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.526026    1504 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-564371" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	W0328 21:14:57.313747 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: W0328 21:14:04.526396    1504 reflector.go:539] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-564371' and this object
	W0328 21:14:57.313955 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.526422    1504 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-564371' and this object
	W0328 21:14:57.314166 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: W0328 21:14:04.526689    1504 reflector.go:539] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	W0328 21:14:57.314409 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.526712    1504 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	I0328 21:14:57.351676 1152207 logs.go:123] Gathering logs for etcd [bb0c936489a1197905676ac1023656be44709c14bf023b6ceee30e7803dcbcd4] ...
	I0328 21:14:57.351754 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb0c936489a1197905676ac1023656be44709c14bf023b6ceee30e7803dcbcd4"
	I0328 21:14:57.422949 1152207 logs.go:123] Gathering logs for kube-scheduler [79ece69608e3978c937732d6c80e8a983e055f882a4ba16d182b60dd3c84283b] ...
	I0328 21:14:57.423035 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79ece69608e3978c937732d6c80e8a983e055f882a4ba16d182b60dd3c84283b"
	I0328 21:14:57.489034 1152207 logs.go:123] Gathering logs for coredns [64c606d5eccd69d5abc91ef3d13e812505a9415493ec3d306cae3133bc8e4c82] ...
	I0328 21:14:57.489106 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64c606d5eccd69d5abc91ef3d13e812505a9415493ec3d306cae3133bc8e4c82"
	I0328 21:14:57.494526 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:57.508779 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:57.551929 1152207 logs.go:123] Gathering logs for container status ...
	I0328 21:14:57.552076 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 21:14:57.659053 1152207 logs.go:123] Gathering logs for dmesg ...
	I0328 21:14:57.659200 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 21:14:57.679028 1152207 logs.go:123] Gathering logs for describe nodes ...
	I0328 21:14:57.679144 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 21:14:57.884357 1152207 logs.go:123] Gathering logs for kube-apiserver [12e77a21a9fd638cda3bb435c8b196a902af41d8c834660357aeb40498c75b4d] ...
	I0328 21:14:57.884428 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e77a21a9fd638cda3bb435c8b196a902af41d8c834660357aeb40498c75b4d"
	I0328 21:14:57.964606 1152207 out.go:304] Setting ErrFile to fd 2...
	I0328 21:14:57.964637 1152207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 21:14:57.964692 1152207 out.go:239] X Problems detected in kubelet:
	W0328 21:14:57.964705 1152207 out.go:239]   Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.526026    1504 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-564371" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	W0328 21:14:57.964714 1152207 out.go:239]   Mar 28 21:14:04 addons-564371 kubelet[1504]: W0328 21:14:04.526396    1504 reflector.go:539] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-564371' and this object
	W0328 21:14:57.964728 1152207 out.go:239]   Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.526422    1504 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-564371' and this object
	W0328 21:14:57.964735 1152207 out.go:239]   Mar 28 21:14:04 addons-564371 kubelet[1504]: W0328 21:14:04.526689    1504 reflector.go:539] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	W0328 21:14:57.964783 1152207 out.go:239]   Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.526712    1504 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	I0328 21:14:57.964789 1152207 out.go:304] Setting ErrFile to fd 2...
	I0328 21:14:57.964796 1152207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:14:57.994884 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:58.009751 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:58.497032 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:58.518509 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:58.996458 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:59.009539 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:59.497257 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:14:59.510614 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:14:59.999212 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:15:00.031604 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:15:00.496478 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:15:00.511577 1152207 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0328 21:15:00.995645 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:15:01.010187 1152207 kapi.go:107] duration metric: took 1m23.013700573s to wait for app.kubernetes.io/name=ingress-nginx ...
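The kapi.go waits above poll each addon's pods by label selector until they report Ready. A roughly equivalent one-shot check with kubectl, using the same selector the ingress wait just completed for (the timeout value is illustrative):

	kubectl --context addons-564371 -n ingress-nginx wait \
	  --for=condition=ready pod \
	  --selector=app.kubernetes.io/name=ingress-nginx --timeout=90s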
	I0328 21:15:01.495638 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:15:01.995492 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:15:02.523277 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:15:02.995989 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:15:03.517559 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:15:03.996345 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:15:04.495503 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:15:04.995261 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:15:05.495882 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:15:06.004071 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:15:06.494860 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:15:06.994931 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:15:07.502496 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:15:07.965155 1152207 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0328 21:15:07.973805 1152207 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0328 21:15:07.975052 1152207 api_server.go:141] control plane version: v1.29.3
	I0328 21:15:07.975076 1152207 api_server.go:131] duration metric: took 11.588063593s to wait for apiserver health ...
	I0328 21:15:07.975085 1152207 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 21:15:07.975108 1152207 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 21:15:07.975178 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 21:15:08.001045 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:15:08.081807 1152207 cri.go:89] found id: "12e77a21a9fd638cda3bb435c8b196a902af41d8c834660357aeb40498c75b4d"
	I0328 21:15:08.081832 1152207 cri.go:89] found id: ""
	I0328 21:15:08.081841 1152207 logs.go:276] 1 containers: [12e77a21a9fd638cda3bb435c8b196a902af41d8c834660357aeb40498c75b4d]
	I0328 21:15:08.081897 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:15:08.100327 1152207 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 21:15:08.100406 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 21:15:08.148592 1152207 cri.go:89] found id: "bb0c936489a1197905676ac1023656be44709c14bf023b6ceee30e7803dcbcd4"
	I0328 21:15:08.148617 1152207 cri.go:89] found id: ""
	I0328 21:15:08.148624 1152207 logs.go:276] 1 containers: [bb0c936489a1197905676ac1023656be44709c14bf023b6ceee30e7803dcbcd4]
	I0328 21:15:08.148683 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:15:08.157180 1152207 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 21:15:08.157258 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 21:15:08.213684 1152207 cri.go:89] found id: "64c606d5eccd69d5abc91ef3d13e812505a9415493ec3d306cae3133bc8e4c82"
	I0328 21:15:08.213709 1152207 cri.go:89] found id: ""
	I0328 21:15:08.213717 1152207 logs.go:276] 1 containers: [64c606d5eccd69d5abc91ef3d13e812505a9415493ec3d306cae3133bc8e4c82]
	I0328 21:15:08.213774 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:15:08.218502 1152207 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 21:15:08.218577 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 21:15:08.260929 1152207 cri.go:89] found id: "79ece69608e3978c937732d6c80e8a983e055f882a4ba16d182b60dd3c84283b"
	I0328 21:15:08.260953 1152207 cri.go:89] found id: ""
	I0328 21:15:08.260961 1152207 logs.go:276] 1 containers: [79ece69608e3978c937732d6c80e8a983e055f882a4ba16d182b60dd3c84283b]
	I0328 21:15:08.261020 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:15:08.264713 1152207 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 21:15:08.264787 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 21:15:08.305419 1152207 cri.go:89] found id: "60c61cb9647ceb3f2afc221cc1b8c5e9efd0c6ff69375cd3d9cc433c844004df"
	I0328 21:15:08.305444 1152207 cri.go:89] found id: ""
	I0328 21:15:08.305451 1152207 logs.go:276] 1 containers: [60c61cb9647ceb3f2afc221cc1b8c5e9efd0c6ff69375cd3d9cc433c844004df]
	I0328 21:15:08.305508 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:15:08.309051 1152207 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 21:15:08.309123 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 21:15:08.349102 1152207 cri.go:89] found id: "855ea02ed22c52ed3dc18e33c7264cb29f781ae69177261c9fc92758cb0e1e58"
	I0328 21:15:08.349133 1152207 cri.go:89] found id: ""
	I0328 21:15:08.349142 1152207 logs.go:276] 1 containers: [855ea02ed22c52ed3dc18e33c7264cb29f781ae69177261c9fc92758cb0e1e58]
	I0328 21:15:08.349238 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:15:08.353059 1152207 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 21:15:08.353137 1152207 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 21:15:08.392997 1152207 cri.go:89] found id: "180a8fae013cf08e447b5167c99e1a2ccffbbb79313a97d1dca70b91092f6595"
	I0328 21:15:08.393022 1152207 cri.go:89] found id: ""
	I0328 21:15:08.393030 1152207 logs.go:276] 1 containers: [180a8fae013cf08e447b5167c99e1a2ccffbbb79313a97d1dca70b91092f6595]
	I0328 21:15:08.393088 1152207 ssh_runner.go:195] Run: which crictl
	I0328 21:15:08.396925 1152207 logs.go:123] Gathering logs for container status ...
	I0328 21:15:08.396950 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 21:15:08.449573 1152207 logs.go:123] Gathering logs for etcd [bb0c936489a1197905676ac1023656be44709c14bf023b6ceee30e7803dcbcd4] ...
	I0328 21:15:08.449605 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb0c936489a1197905676ac1023656be44709c14bf023b6ceee30e7803dcbcd4"
	I0328 21:15:08.495540 1152207 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0328 21:15:08.511936 1152207 logs.go:123] Gathering logs for kube-controller-manager [855ea02ed22c52ed3dc18e33c7264cb29f781ae69177261c9fc92758cb0e1e58] ...
	I0328 21:15:08.511971 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855ea02ed22c52ed3dc18e33c7264cb29f781ae69177261c9fc92758cb0e1e58"
	I0328 21:15:08.609361 1152207 logs.go:123] Gathering logs for kindnet [180a8fae013cf08e447b5167c99e1a2ccffbbb79313a97d1dca70b91092f6595] ...
	I0328 21:15:08.609408 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 180a8fae013cf08e447b5167c99e1a2ccffbbb79313a97d1dca70b91092f6595"
	I0328 21:15:08.654866 1152207 logs.go:123] Gathering logs for kube-apiserver [12e77a21a9fd638cda3bb435c8b196a902af41d8c834660357aeb40498c75b4d] ...
	I0328 21:15:08.654897 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12e77a21a9fd638cda3bb435c8b196a902af41d8c834660357aeb40498c75b4d"
	I0328 21:15:08.740522 1152207 logs.go:123] Gathering logs for coredns [64c606d5eccd69d5abc91ef3d13e812505a9415493ec3d306cae3133bc8e4c82] ...
	I0328 21:15:08.740565 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 64c606d5eccd69d5abc91ef3d13e812505a9415493ec3d306cae3133bc8e4c82"
	I0328 21:15:08.785613 1152207 logs.go:123] Gathering logs for kube-scheduler [79ece69608e3978c937732d6c80e8a983e055f882a4ba16d182b60dd3c84283b] ...
	I0328 21:15:08.785642 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79ece69608e3978c937732d6c80e8a983e055f882a4ba16d182b60dd3c84283b"
	I0328 21:15:08.843027 1152207 logs.go:123] Gathering logs for kube-proxy [60c61cb9647ceb3f2afc221cc1b8c5e9efd0c6ff69375cd3d9cc433c844004df] ...
	I0328 21:15:08.843058 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60c61cb9647ceb3f2afc221cc1b8c5e9efd0c6ff69375cd3d9cc433c844004df"
	I0328 21:15:08.884228 1152207 logs.go:123] Gathering logs for CRI-O ...
	I0328 21:15:08.884259 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 21:15:08.984648 1152207 logs.go:123] Gathering logs for kubelet ...
	I0328 21:15:08.984685 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 21:15:08.997051 1152207 kapi.go:107] duration metric: took 1m30.508045022s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0328 21:15:09.001194 1152207 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0328 21:15:09.003090 1152207 addons.go:505] duration metric: took 1m37.176422159s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	W0328 21:15:09.039617 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: W0328 21:14:04.509762    1504 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-564371" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-564371' and this object
	W0328 21:15:09.039858 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.509795    1504 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-564371" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-564371' and this object
	W0328 21:15:09.040936 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: W0328 21:14:04.525992    1504 reflector.go:539] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-564371" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	W0328 21:15:09.041141 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.526026    1504 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-564371" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	W0328 21:15:09.041306 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: W0328 21:14:04.526396    1504 reflector.go:539] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-564371' and this object
	W0328 21:15:09.041493 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.526422    1504 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-564371' and this object
	W0328 21:15:09.041682 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: W0328 21:14:04.526689    1504 reflector.go:539] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	W0328 21:15:09.041897 1152207 logs.go:138] Found kubelet problem: Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.526712    1504 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	I0328 21:15:09.079034 1152207 logs.go:123] Gathering logs for dmesg ...
	I0328 21:15:09.079068 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 21:15:09.100542 1152207 logs.go:123] Gathering logs for describe nodes ...
	I0328 21:15:09.100574 1152207 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 21:15:09.254799 1152207 out.go:304] Setting ErrFile to fd 2...
	I0328 21:15:09.254826 1152207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 21:15:09.254895 1152207 out.go:239] X Problems detected in kubelet:
	W0328 21:15:09.254908 1152207 out.go:239]   Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.526026    1504 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-564371" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	W0328 21:15:09.254924 1152207 out.go:239]   Mar 28 21:14:04 addons-564371 kubelet[1504]: W0328 21:14:04.526396    1504 reflector.go:539] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-564371' and this object
	W0328 21:15:09.254944 1152207 out.go:239]   Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.526422    1504 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-564371' and this object
	W0328 21:15:09.254952 1152207 out.go:239]   Mar 28 21:14:04 addons-564371 kubelet[1504]: W0328 21:14:04.526689    1504 reflector.go:539] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	W0328 21:15:09.254958 1152207 out.go:239]   Mar 28 21:14:04 addons-564371 kubelet[1504]: E0328 21:14:04.526712    1504 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-564371" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-564371' and this object
	I0328 21:15:09.254972 1152207 out.go:304] Setting ErrFile to fd 2...
	I0328 21:15:09.254978 1152207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:15:19.265501 1152207 system_pods.go:59] 18 kube-system pods found
	I0328 21:15:19.265542 1152207 system_pods.go:61] "coredns-76f75df574-dqf85" [92d5cbb0-8007-47c7-acf4-08a786667cf3] Running
	I0328 21:15:19.265549 1152207 system_pods.go:61] "csi-hostpath-attacher-0" [f8de6463-4eb8-4214-9dbd-6e3adfbaca0d] Running
	I0328 21:15:19.265553 1152207 system_pods.go:61] "csi-hostpath-resizer-0" [b13ea86a-6624-436e-a1b7-1365d9a7ff82] Running
	I0328 21:15:19.265557 1152207 system_pods.go:61] "csi-hostpathplugin-l54zs" [03e939fc-4930-43ac-8b24-bc46b8dc7d62] Running
	I0328 21:15:19.265562 1152207 system_pods.go:61] "etcd-addons-564371" [d0fc9ad9-62b1-4858-8400-47fb8e2d09be] Running
	I0328 21:15:19.265568 1152207 system_pods.go:61] "kindnet-kzgpd" [95065e06-97dd-4e27-beb6-cf3e3d5dcf2b] Running
	I0328 21:15:19.265572 1152207 system_pods.go:61] "kube-apiserver-addons-564371" [f6326b68-0df5-4068-9edf-089368a64110] Running
	I0328 21:15:19.265576 1152207 system_pods.go:61] "kube-controller-manager-addons-564371" [056bebb8-aa28-4e93-b300-11cdedbcc8c3] Running
	I0328 21:15:19.265619 1152207 system_pods.go:61] "kube-ingress-dns-minikube" [6c84243b-4db7-48e8-a60f-33e85a15acf6] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0328 21:15:19.265628 1152207 system_pods.go:61] "kube-proxy-tgnbc" [109ab600-88f6-48b7-9bb5-ddffdac020c2] Running
	I0328 21:15:19.265635 1152207 system_pods.go:61] "kube-scheduler-addons-564371" [8045d83f-d678-48a4-9982-2d96b1e7f1b6] Running
	I0328 21:15:19.265641 1152207 system_pods.go:61] "metrics-server-69cf46c98-vf465" [f04170f8-a288-4282-a0df-90db24d0b88e] Running
	I0328 21:15:19.265645 1152207 system_pods.go:61] "nvidia-device-plugin-daemonset-98h7b" [a89ce652-ae4e-4723-8514-ba6a7a219889] Running
	I0328 21:15:19.265657 1152207 system_pods.go:61] "registry-proxy-v2zfr" [37b97352-e798-4a32-a0d4-3808ead8f4b0] Running
	I0328 21:15:19.265660 1152207 system_pods.go:61] "registry-xs99m" [d5859bb3-d004-4e14-b8bd-94a73c9673a1] Running
	I0328 21:15:19.265664 1152207 system_pods.go:61] "snapshot-controller-58dbcc7b99-98hw9" [ff794e4c-4e91-4e28-b8a4-14bcac6c58c9] Running
	I0328 21:15:19.265668 1152207 system_pods.go:61] "snapshot-controller-58dbcc7b99-9j84b" [a08af870-af76-4356-ae10-12e024fffb61] Running
	I0328 21:15:19.265692 1152207 system_pods.go:61] "storage-provisioner" [d026ae21-08f8-4755-b26a-4ce995a1042a] Running
	I0328 21:15:19.265713 1152207 system_pods.go:74] duration metric: took 11.290620162s to wait for pod list to return data ...
	I0328 21:15:19.265723 1152207 default_sa.go:34] waiting for default service account to be created ...
	I0328 21:15:19.269001 1152207 default_sa.go:45] found service account: "default"
	I0328 21:15:19.269027 1152207 default_sa.go:55] duration metric: took 3.29451ms for default service account to be created ...
	I0328 21:15:19.269038 1152207 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 21:15:19.279007 1152207 system_pods.go:86] 18 kube-system pods found
	I0328 21:15:19.279045 1152207 system_pods.go:89] "coredns-76f75df574-dqf85" [92d5cbb0-8007-47c7-acf4-08a786667cf3] Running
	I0328 21:15:19.279053 1152207 system_pods.go:89] "csi-hostpath-attacher-0" [f8de6463-4eb8-4214-9dbd-6e3adfbaca0d] Running
	I0328 21:15:19.279059 1152207 system_pods.go:89] "csi-hostpath-resizer-0" [b13ea86a-6624-436e-a1b7-1365d9a7ff82] Running
	I0328 21:15:19.279124 1152207 system_pods.go:89] "csi-hostpathplugin-l54zs" [03e939fc-4930-43ac-8b24-bc46b8dc7d62] Running
	I0328 21:15:19.279137 1152207 system_pods.go:89] "etcd-addons-564371" [d0fc9ad9-62b1-4858-8400-47fb8e2d09be] Running
	I0328 21:15:19.279144 1152207 system_pods.go:89] "kindnet-kzgpd" [95065e06-97dd-4e27-beb6-cf3e3d5dcf2b] Running
	I0328 21:15:19.279149 1152207 system_pods.go:89] "kube-apiserver-addons-564371" [f6326b68-0df5-4068-9edf-089368a64110] Running
	I0328 21:15:19.279158 1152207 system_pods.go:89] "kube-controller-manager-addons-564371" [056bebb8-aa28-4e93-b300-11cdedbcc8c3] Running
	I0328 21:15:19.279166 1152207 system_pods.go:89] "kube-ingress-dns-minikube" [6c84243b-4db7-48e8-a60f-33e85a15acf6] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0328 21:15:19.279176 1152207 system_pods.go:89] "kube-proxy-tgnbc" [109ab600-88f6-48b7-9bb5-ddffdac020c2] Running
	I0328 21:15:19.279198 1152207 system_pods.go:89] "kube-scheduler-addons-564371" [8045d83f-d678-48a4-9982-2d96b1e7f1b6] Running
	I0328 21:15:19.279210 1152207 system_pods.go:89] "metrics-server-69cf46c98-vf465" [f04170f8-a288-4282-a0df-90db24d0b88e] Running
	I0328 21:15:19.279215 1152207 system_pods.go:89] "nvidia-device-plugin-daemonset-98h7b" [a89ce652-ae4e-4723-8514-ba6a7a219889] Running
	I0328 21:15:19.279219 1152207 system_pods.go:89] "registry-proxy-v2zfr" [37b97352-e798-4a32-a0d4-3808ead8f4b0] Running
	I0328 21:15:19.279235 1152207 system_pods.go:89] "registry-xs99m" [d5859bb3-d004-4e14-b8bd-94a73c9673a1] Running
	I0328 21:15:19.279247 1152207 system_pods.go:89] "snapshot-controller-58dbcc7b99-98hw9" [ff794e4c-4e91-4e28-b8a4-14bcac6c58c9] Running
	I0328 21:15:19.279251 1152207 system_pods.go:89] "snapshot-controller-58dbcc7b99-9j84b" [a08af870-af76-4356-ae10-12e024fffb61] Running
	I0328 21:15:19.279255 1152207 system_pods.go:89] "storage-provisioner" [d026ae21-08f8-4755-b26a-4ce995a1042a] Running
	I0328 21:15:19.279264 1152207 system_pods.go:126] duration metric: took 10.221414ms to wait for k8s-apps to be running ...
	I0328 21:15:19.279275 1152207 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 21:15:19.279343 1152207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 21:15:19.291653 1152207 system_svc.go:56] duration metric: took 12.368543ms WaitForService to wait for kubelet
	I0328 21:15:19.291681 1152207 kubeadm.go:576] duration metric: took 1m47.465284588s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 21:15:19.291703 1152207 node_conditions.go:102] verifying NodePressure condition ...
	I0328 21:15:19.294916 1152207 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0328 21:15:19.294954 1152207 node_conditions.go:123] node cpu capacity is 2
	I0328 21:15:19.294966 1152207 node_conditions.go:105] duration metric: took 3.256405ms to run NodePressure ...
	I0328 21:15:19.294980 1152207 start.go:240] waiting for startup goroutines ...
	I0328 21:15:19.294988 1152207 start.go:245] waiting for cluster config update ...
	I0328 21:15:19.295003 1152207 start.go:254] writing updated cluster config ...
	I0328 21:15:19.295304 1152207 ssh_runner.go:195] Run: rm -f paused
	I0328 21:15:19.618460 1152207 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 21:15:19.622944 1152207 out.go:177] * Done! kubectl is now configured to use "addons-564371" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 28 21:19:01 addons-564371 crio[919]: time="2024-03-28 21:19:01.713440504Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=7891ecbd-3619-4553-9d72-a5680bd95f38 name=/runtime.v1.ImageService/ImageStatus
	Mar 28 21:19:01 addons-564371 crio[919]: time="2024-03-28 21:19:01.713682463Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=7891ecbd-3619-4553-9d72-a5680bd95f38 name=/runtime.v1.ImageService/ImageStatus
	Mar 28 21:19:01 addons-564371 crio[919]: time="2024-03-28 21:19:01.714602498Z" level=info msg="Creating container: default/hello-world-app-5d77478584-zgtmd/hello-world-app" id=3a1ef72d-990f-4f93-a689-b7f2f1d759cf name=/runtime.v1.RuntimeService/CreateContainer
	Mar 28 21:19:01 addons-564371 crio[919]: time="2024-03-28 21:19:01.714694313Z" level=warning msg="Allowed annotations are specified for workload []"
	Mar 28 21:19:01 addons-564371 crio[919]: time="2024-03-28 21:19:01.743393048Z" level=info msg="Stopping container: 79ddb2e00f1ac1966cc9550bfe9a31a08caa6651670057cfd5f74381cc8d33c0 (timeout: 2s)" id=821aaa24-8028-44e2-8225-2c1d481a9f6e name=/runtime.v1.RuntimeService/StopContainer
	Mar 28 21:19:01 addons-564371 crio[919]: time="2024-03-28 21:19:01.781601603Z" level=info msg="Created container e3a0262115158b87f1431b53f7f4265e0271038ef973cfeeb37804ad4d91d8d5: default/hello-world-app-5d77478584-zgtmd/hello-world-app" id=3a1ef72d-990f-4f93-a689-b7f2f1d759cf name=/runtime.v1.RuntimeService/CreateContainer
	Mar 28 21:19:01 addons-564371 crio[919]: time="2024-03-28 21:19:01.782595656Z" level=info msg="Starting container: e3a0262115158b87f1431b53f7f4265e0271038ef973cfeeb37804ad4d91d8d5" id=68cbd0b1-7bdb-40df-9dab-bf2fd229e5a1 name=/runtime.v1.RuntimeService/StartContainer
	Mar 28 21:19:01 addons-564371 crio[919]: time="2024-03-28 21:19:01.792463063Z" level=info msg="Started container" PID=8303 containerID=e3a0262115158b87f1431b53f7f4265e0271038ef973cfeeb37804ad4d91d8d5 description=default/hello-world-app-5d77478584-zgtmd/hello-world-app id=68cbd0b1-7bdb-40df-9dab-bf2fd229e5a1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cc4a8891b2f4ef5da7a06443ccf171eb46f259c0660cadae5ad601853b1239c0
	Mar 28 21:19:01 addons-564371 conmon[8285]: conmon e3a0262115158b87f143 <ninfo>: container 8303 exited with status 1
	Mar 28 21:19:02 addons-564371 crio[919]: time="2024-03-28 21:19:02.809549657Z" level=info msg="Removing container: 8535fd78b8dddee658f05a1bc636111f84ef61be9065a6f6c88a98296bb9c6e3" id=3396d130-fb99-4d00-a430-d9fe02bf9dbe name=/runtime.v1.RuntimeService/RemoveContainer
	Mar 28 21:19:02 addons-564371 crio[919]: time="2024-03-28 21:19:02.831350812Z" level=info msg="Removed container 8535fd78b8dddee658f05a1bc636111f84ef61be9065a6f6c88a98296bb9c6e3: default/hello-world-app-5d77478584-zgtmd/hello-world-app" id=3396d130-fb99-4d00-a430-d9fe02bf9dbe name=/runtime.v1.RuntimeService/RemoveContainer
	Mar 28 21:19:03 addons-564371 crio[919]: time="2024-03-28 21:19:03.758137398Z" level=warning msg="Stopping container 79ddb2e00f1ac1966cc9550bfe9a31a08caa6651670057cfd5f74381cc8d33c0 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=821aaa24-8028-44e2-8225-2c1d481a9f6e name=/runtime.v1.RuntimeService/StopContainer
	Mar 28 21:19:03 addons-564371 conmon[5125]: conmon 79ddb2e00f1ac1966cc9 <ninfo>: container 5136 exited with status 137
	Mar 28 21:19:03 addons-564371 crio[919]: time="2024-03-28 21:19:03.896718872Z" level=info msg="Stopped container 79ddb2e00f1ac1966cc9550bfe9a31a08caa6651670057cfd5f74381cc8d33c0: ingress-nginx/ingress-nginx-controller-65496f9567-jttrb/controller" id=821aaa24-8028-44e2-8225-2c1d481a9f6e name=/runtime.v1.RuntimeService/StopContainer
	Mar 28 21:19:03 addons-564371 crio[919]: time="2024-03-28 21:19:03.897292858Z" level=info msg="Stopping pod sandbox: cc73f12a32be2c2830140221f07c0f121c1ea2c77de58a8a5cf0ae8971192075" id=efbd8250-7be7-418e-8f9e-ad0beb405bb5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Mar 28 21:19:03 addons-564371 crio[919]: time="2024-03-28 21:19:03.900456169Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-3UOLOOYRXLDIQCLB - [0:0]\n:KUBE-HP-OP4BRZYZH3JYJJ6I - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-OP4BRZYZH3JYJJ6I\n-X KUBE-HP-3UOLOOYRXLDIQCLB\nCOMMIT\n"
	Mar 28 21:19:03 addons-564371 crio[919]: time="2024-03-28 21:19:03.914428911Z" level=info msg="Closing host port tcp:80"
	Mar 28 21:19:03 addons-564371 crio[919]: time="2024-03-28 21:19:03.914477313Z" level=info msg="Closing host port tcp:443"
	Mar 28 21:19:03 addons-564371 crio[919]: time="2024-03-28 21:19:03.915857907Z" level=info msg="Host port tcp:80 does not have an open socket"
	Mar 28 21:19:03 addons-564371 crio[919]: time="2024-03-28 21:19:03.915892105Z" level=info msg="Host port tcp:443 does not have an open socket"
	Mar 28 21:19:03 addons-564371 crio[919]: time="2024-03-28 21:19:03.916176715Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-65496f9567-jttrb Namespace:ingress-nginx ID:cc73f12a32be2c2830140221f07c0f121c1ea2c77de58a8a5cf0ae8971192075 UID:bad1cc95-8b17-4e55-8714-73c51552d256 NetNS:/var/run/netns/4a05798e-54f6-424d-9d8e-1c3352a442e0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Mar 28 21:19:03 addons-564371 crio[919]: time="2024-03-28 21:19:03.916377214Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-65496f9567-jttrb from CNI network \"kindnet\" (type=ptp)"
	Mar 28 21:19:03 addons-564371 crio[919]: time="2024-03-28 21:19:03.937812316Z" level=info msg="Stopped pod sandbox: cc73f12a32be2c2830140221f07c0f121c1ea2c77de58a8a5cf0ae8971192075" id=efbd8250-7be7-418e-8f9e-ad0beb405bb5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Mar 28 21:19:04 addons-564371 crio[919]: time="2024-03-28 21:19:04.815084927Z" level=info msg="Removing container: 79ddb2e00f1ac1966cc9550bfe9a31a08caa6651670057cfd5f74381cc8d33c0" id=a9c4b91a-8e9d-444e-89ee-f0bd34490863 name=/runtime.v1.RuntimeService/RemoveContainer
	Mar 28 21:19:04 addons-564371 crio[919]: time="2024-03-28 21:19:04.832300708Z" level=info msg="Removed container 79ddb2e00f1ac1966cc9550bfe9a31a08caa6651670057cfd5f74381cc8d33c0: ingress-nginx/ingress-nginx-controller-65496f9567-jttrb/controller" id=a9c4b91a-8e9d-444e-89ee-f0bd34490863 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e3a0262115158       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             7 seconds ago       Exited              hello-world-app           2                   cc4a8891b2f4e       hello-world-app-5d77478584-zgtmd
	9e1dab64edb1d       docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742                              2 minutes ago       Running             nginx                     0                   2ba247e2ea11a       nginx
	d3e739c2948b3       ghcr.io/headlamp-k8s/headlamp@sha256:1f277f42730106526a27560517a4c5f9253ccb2477be458986f44a791158a02c                        3 minutes ago       Running             headlamp                  0                   f3502bd14e2d1       headlamp-5b77dbd7c4-ssw5x
	05299b11cd42e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 4 minutes ago       Running             gcp-auth                  0                   3ead7385fb9b7       gcp-auth-7d69788767-4q5bh
	bb464e22208c8       1a024e390dd050d584b5c93bb30810e8be713157ab713b0d77a7af14dfe88c1e                                                             4 minutes ago       Exited              patch                     1                   3485aec4726e7       ingress-nginx-admission-patch-g87qz
	9b93a586798b5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0b1098ef00acee905f9736f98dd151af0a38d0fef0ccf9fb5ad189b20933e5f8   4 minutes ago       Exited              create                    0                   742a4ae3731c6       ingress-nginx-admission-create-6w79x
	862af4066663e       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             4 minutes ago       Running             local-path-provisioner    0                   82179fe72b82f       local-path-provisioner-78b46b4d5c-lh98f
	79e74b131c92b       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago       Running             yakd                      0                   20962fb673d1d       yakd-dashboard-9947fc6bf-4qqbw
	64c606d5eccd6       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             5 minutes ago       Running             coredns                   0                   b25d3c9d891e5       coredns-76f75df574-dqf85
	c8e49b9bcacf7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago       Running             storage-provisioner       0                   7a77fadfe9a4d       storage-provisioner
	60c61cb9647ce       0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775                                                             5 minutes ago       Running             kube-proxy                0                   d259b7db0d1f1       kube-proxy-tgnbc
	180a8fae013cf       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d                                                             5 minutes ago       Running             kindnet-cni               0                   2a09c9bacb7c3       kindnet-kzgpd
	79ece69608e39       4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb                                                             5 minutes ago       Running             kube-scheduler            0                   4fa09e7c31d6a       kube-scheduler-addons-564371
	855ea02ed22c5       121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195                                                             5 minutes ago       Running             kube-controller-manager   0                   43795cb16136d       kube-controller-manager-addons-564371
	bb0c936489a11       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                             5 minutes ago       Running             etcd                      0                   394246a16f3f3       etcd-addons-564371
	12e77a21a9fd6       2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794                                                             5 minutes ago       Running             kube-apiserver            0                   ee625ab683da3       kube-apiserver-addons-564371
	
	
	==> coredns [64c606d5eccd69d5abc91ef3d13e812505a9415493ec3d306cae3133bc8e4c82] <==
	[INFO] 10.244.0.20:60532 - 11891 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000131658s
	[INFO] 10.244.0.20:45457 - 52501 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002530839s
	[INFO] 10.244.0.20:60532 - 63118 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001618837s
	[INFO] 10.244.0.20:45457 - 1339 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001949723s
	[INFO] 10.244.0.20:60532 - 62976 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001042668s
	[INFO] 10.244.0.20:60532 - 40715 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000126834s
	[INFO] 10.244.0.20:45457 - 13767 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00004448s
	[INFO] 10.244.0.20:50306 - 40778 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00011163s
	[INFO] 10.244.0.20:42267 - 1365 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000084438s
	[INFO] 10.244.0.20:42267 - 16656 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000163888s
	[INFO] 10.244.0.20:50306 - 28877 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000126317s
	[INFO] 10.244.0.20:42267 - 17247 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000117603s
	[INFO] 10.244.0.20:42267 - 46575 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000052242s
	[INFO] 10.244.0.20:50306 - 48395 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000102022s
	[INFO] 10.244.0.20:50306 - 21942 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000061013s
	[INFO] 10.244.0.20:42267 - 37027 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000082248s
	[INFO] 10.244.0.20:42267 - 7514 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005417s
	[INFO] 10.244.0.20:50306 - 28273 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000080959s
	[INFO] 10.244.0.20:50306 - 44029 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000053464s
	[INFO] 10.244.0.20:42267 - 51418 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001410199s
	[INFO] 10.244.0.20:50306 - 27752 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001306691s
	[INFO] 10.244.0.20:50306 - 44564 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001049437s
	[INFO] 10.244.0.20:42267 - 34585 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001162857s
	[INFO] 10.244.0.20:42267 - 26132 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000148069s
	[INFO] 10.244.0.20:50306 - 1723 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000113419s
	
	
	==> describe nodes <==
	Name:               addons-564371
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-564371
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2883ffbf70a3cdb38617e0fd1a9bb421b3d79967
	                    minikube.k8s.io/name=addons-564371
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T21_13_19_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-564371
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 21:13:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-564371
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 21:19:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 21:18:56 +0000   Thu, 28 Mar 2024 21:13:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 21:18:56 +0000   Thu, 28 Mar 2024 21:13:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 21:18:56 +0000   Thu, 28 Mar 2024 21:13:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 21:18:56 +0000   Thu, 28 Mar 2024 21:14:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-564371
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 149007c033b74d828c5ab85c4b482385
	  System UUID:                79e68f57-c553-4044-930c-ef71b799093e
	  Boot ID:                    18dd0f92-d332-41a7-aacd-d07143d316b2
	  Kernel Version:             5.15.0-1056-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-zgtmd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  gcp-auth                    gcp-auth-7d69788767-4q5bh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  headlamp                    headlamp-5b77dbd7c4-ssw5x                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  kube-system                 coredns-76f75df574-dqf85                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m38s
	  kube-system                 etcd-addons-564371                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m50s
	  kube-system                 kindnet-kzgpd                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m38s
	  kube-system                 kube-apiserver-addons-564371               250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kube-controller-manager-addons-564371      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kube-proxy-tgnbc                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-scheduler-addons-564371               100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  local-path-storage          local-path-provisioner-78b46b4d5c-lh98f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-4qqbw             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m31s                  kube-proxy       
	  Normal  Starting                 5m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m58s (x8 over 5m58s)  kubelet          Node addons-564371 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s (x8 over 5m58s)  kubelet          Node addons-564371 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s (x8 over 5m58s)  kubelet          Node addons-564371 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m51s                  kubelet          Node addons-564371 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m51s                  kubelet          Node addons-564371 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m51s                  kubelet          Node addons-564371 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m39s                  node-controller  Node addons-564371 event: Registered Node addons-564371 in Controller
	  Normal  NodeReady                5m5s                   kubelet          Node addons-564371 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001050] FS-Cache: O-key=[8] 'fb3e5c0100000000'
	[  +0.000847] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000d4269778{9p.inode} n=000000006e14a8e7
	[  +0.001044] FS-Cache: N-key=[8] 'fb3e5c0100000000'
	[  +0.002988] FS-Cache: Duplicate cookie detected
	[  +0.000777] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.000958] FS-Cache: O-cookie d=00000000d4269778{9p.inode} n=00000000661996bd
	[  +0.001203] FS-Cache: O-key=[8] 'fb3e5c0100000000'
	[  +0.000712] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000937] FS-Cache: N-cookie d=00000000d4269778{9p.inode} n=000000000f9a2263
	[  +0.001033] FS-Cache: N-key=[8] 'fb3e5c0100000000'
	[  +2.765823] FS-Cache: Duplicate cookie detected
	[  +0.000778] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.001033] FS-Cache: O-cookie d=00000000d4269778{9p.inode} n=0000000021fd772e
	[  +0.001130] FS-Cache: O-key=[8] 'fa3e5c0100000000'
	[  +0.000852] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001052] FS-Cache: N-cookie d=00000000d4269778{9p.inode} n=000000007f9e213c
	[  +0.001121] FS-Cache: N-key=[8] 'fa3e5c0100000000'
	[  +0.493938] FS-Cache: Duplicate cookie detected
	[  +0.000704] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001013] FS-Cache: O-cookie d=00000000d4269778{9p.inode} n=000000004ce1b25d
	[  +0.001193] FS-Cache: O-key=[8] '003f5c0100000000'
	[  +0.000837] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001033] FS-Cache: N-cookie d=00000000d4269778{9p.inode} n=0000000060c6253b
	[  +0.001098] FS-Cache: N-key=[8] '003f5c0100000000'
	
	
	==> etcd [bb0c936489a1197905676ac1023656be44709c14bf023b6ceee30e7803dcbcd4] <==
	{"level":"info","ts":"2024-03-28T21:13:12.239156Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-28T21:13:12.2392Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-28T21:13:12.238602Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T21:13:12.240193Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T21:13:12.240356Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T21:13:12.240411Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T21:13:12.245282Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-28T21:13:12.246236Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-03-28T21:13:32.687508Z","caller":"traceutil/trace.go:171","msg":"trace[1510326399] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"116.982524ms","start":"2024-03-28T21:13:32.570495Z","end":"2024-03-28T21:13:32.687477Z","steps":["trace[1510326399] 'process raft request'  (duration: 116.873823ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-28T21:13:34.772906Z","caller":"traceutil/trace.go:171","msg":"trace[1864729452] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"188.444712ms","start":"2024-03-28T21:13:34.584447Z","end":"2024-03-28T21:13:34.772892Z","steps":["trace[1864729452] 'process raft request'  (duration: 188.349254ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-28T21:13:34.780626Z","caller":"traceutil/trace.go:171","msg":"trace[1216964034] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"159.979437ms","start":"2024-03-28T21:13:34.620634Z","end":"2024-03-28T21:13:34.780613Z","steps":["trace[1216964034] 'process raft request'  (duration: 159.725417ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-28T21:13:34.931384Z","caller":"traceutil/trace.go:171","msg":"trace[943827737] linearizableReadLoop","detail":"{readStateIndex:428; appliedIndex:426; }","duration":"150.876218ms","start":"2024-03-28T21:13:34.780492Z","end":"2024-03-28T21:13:34.931368Z","steps":["trace[943827737] 'read index received'  (duration: 101.183504ms)","trace[943827737] 'applied index is now lower than readState.Index'  (duration: 49.692181ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-28T21:13:34.93478Z","caller":"traceutil/trace.go:171","msg":"trace[2130812968] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"230.694464ms","start":"2024-03-28T21:13:34.704068Z","end":"2024-03-28T21:13:34.934762Z","steps":["trace[2130812968] 'process raft request'  (duration: 177.571733ms)","trace[2130812968] 'compare'  (duration: 49.479078ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-28T21:13:34.936369Z","caller":"traceutil/trace.go:171","msg":"trace[882480059] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"163.611717ms","start":"2024-03-28T21:13:34.772747Z","end":"2024-03-28T21:13:34.936359Z","steps":["trace[882480059] 'process raft request'  (duration: 158.482043ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T21:13:34.936507Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.243738ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-564371\" ","response":"range_response_count:1 size:5751"}
	{"level":"info","ts":"2024-03-28T21:13:34.957703Z","caller":"traceutil/trace.go:171","msg":"trace[426702404] range","detail":"{range_begin:/registry/minions/addons-564371; range_end:; response_count:1; response_revision:418; }","duration":"180.44373ms","start":"2024-03-28T21:13:34.777241Z","end":"2024-03-28T21:13:34.957684Z","steps":["trace[426702404] 'agreement among raft nodes before linearized reading'  (duration: 159.149026ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T21:13:34.936566Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.486386ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-28T21:13:34.967172Z","caller":"traceutil/trace.go:171","msg":"trace[559705029] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:418; }","duration":"181.418461ms","start":"2024-03-28T21:13:34.77706Z","end":"2024-03-28T21:13:34.958478Z","steps":["trace[559705029] 'agreement among raft nodes before linearized reading'  (duration: 159.474916ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T21:13:34.936599Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.044354ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-28T21:13:34.968462Z","caller":"traceutil/trace.go:171","msg":"trace[1181682784] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:418; }","duration":"195.901599ms","start":"2024-03-28T21:13:34.772547Z","end":"2024-03-28T21:13:34.968449Z","steps":["trace[1181682784] 'agreement among raft nodes before linearized reading'  (duration: 164.031259ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T21:13:34.936643Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.117519ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-03-28T21:13:34.969591Z","caller":"traceutil/trace.go:171","msg":"trace[1148201728] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:418; }","duration":"197.068541ms","start":"2024-03-28T21:13:34.77251Z","end":"2024-03-28T21:13:34.969579Z","steps":["trace[1148201728] 'agreement among raft nodes before linearized reading'  (duration: 164.099427ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T21:13:35.292522Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.442221ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-28T21:13:35.292589Z","caller":"traceutil/trace.go:171","msg":"trace[1739177852] range","detail":"{range_begin:/registry/controllers/kube-system/registry; range_end:; response_count:0; response_revision:427; }","duration":"117.519431ms","start":"2024-03-28T21:13:35.175052Z","end":"2024-03-28T21:13:35.292572Z","steps":["trace[1739177852] 'agreement among raft nodes before linearized reading'  (duration: 83.107422ms)","trace[1739177852] 'range keys from in-memory index tree'  (duration: 34.313508ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-28T21:13:35.292996Z","caller":"traceutil/trace.go:171","msg":"trace[1368987938] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"104.388292ms","start":"2024-03-28T21:13:35.188597Z","end":"2024-03-28T21:13:35.292986Z","steps":["trace[1368987938] 'process raft request'  (duration: 71.121735ms)","trace[1368987938] 'compare'  (duration: 32.901669ms)"],"step_count":2}
	
	
	==> gcp-auth [05299b11cd42eb540351b831341fb6f6d425c4a46c0aaaa3c3063077c3297e31] <==
	2024/03/28 21:14:53 GCP Auth Webhook started!
	2024/03/28 21:15:30 Ready to marshal response ...
	2024/03/28 21:15:30 Ready to write response ...
	2024/03/28 21:15:40 Ready to marshal response ...
	2024/03/28 21:15:40 Ready to write response ...
	2024/03/28 21:15:47 Ready to marshal response ...
	2024/03/28 21:15:47 Ready to write response ...
	2024/03/28 21:15:48 Ready to marshal response ...
	2024/03/28 21:15:48 Ready to write response ...
	2024/03/28 21:15:56 Ready to marshal response ...
	2024/03/28 21:15:56 Ready to write response ...
	2024/03/28 21:16:02 Ready to marshal response ...
	2024/03/28 21:16:02 Ready to write response ...
	2024/03/28 21:16:04 Ready to marshal response ...
	2024/03/28 21:16:04 Ready to write response ...
	2024/03/28 21:16:04 Ready to marshal response ...
	2024/03/28 21:16:04 Ready to write response ...
	2024/03/28 21:16:04 Ready to marshal response ...
	2024/03/28 21:16:04 Ready to write response ...
	2024/03/28 21:16:21 Ready to marshal response ...
	2024/03/28 21:16:21 Ready to write response ...
	2024/03/28 21:18:42 Ready to marshal response ...
	2024/03/28 21:18:42 Ready to write response ...
	
	
	==> kernel <==
	 21:19:09 up  5:01,  0 users,  load average: 1.24, 1.86, 2.54
	Linux addons-564371 5.15.0-1056-aws #61~20.04.1-Ubuntu SMP Wed Mar 13 17:45:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [180a8fae013cf08e447b5167c99e1a2ccffbbb79313a97d1dca70b91092f6595] <==
	I0328 21:17:04.400330       1 main.go:227] handling current node
	I0328 21:17:14.413010       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 21:17:14.413040       1 main.go:227] handling current node
	I0328 21:17:24.423438       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 21:17:24.423554       1 main.go:227] handling current node
	I0328 21:17:34.427599       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 21:17:34.427631       1 main.go:227] handling current node
	I0328 21:17:44.516500       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 21:17:44.516532       1 main.go:227] handling current node
	I0328 21:17:54.520371       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 21:17:54.520398       1 main.go:227] handling current node
	I0328 21:18:04.526192       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 21:18:04.526222       1 main.go:227] handling current node
	I0328 21:18:14.536854       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 21:18:14.536881       1 main.go:227] handling current node
	I0328 21:18:24.541110       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 21:18:24.541135       1 main.go:227] handling current node
	I0328 21:18:34.545100       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 21:18:34.545125       1 main.go:227] handling current node
	I0328 21:18:44.556922       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 21:18:44.557028       1 main.go:227] handling current node
	I0328 21:18:54.561415       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 21:18:54.561443       1 main.go:227] handling current node
	I0328 21:19:04.572795       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0328 21:19:04.572825       1 main.go:227] handling current node
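
Steady-state kindnet output like the above, one "Handling node" pair roughly every ten seconds for the single node's IP map, is the CNI daemon's normal reconcile loop rather than an error. A sketch for pulling the same view from a live cluster (the app=kindnet label is an assumption taken from minikube's kindnet manifest):

	kubectl -n kube-system get pods -l app=kindnet -o wide
	kubectl -n kube-system logs -l app=kindnet --tail=5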
	
	
	==> kube-apiserver [12e77a21a9fd638cda3bb435c8b196a902af41d8c834660357aeb40498c75b4d] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0328 21:14:23.117202       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.176.217:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.176.217:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.176.217:443: connect: connection refused
	E0328 21:14:23.123000       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.176.217:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.176.217:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.176.217:443: connect: connection refused
	I0328 21:14:23.385146       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0328 21:15:48.271722       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0328 21:16:04.711595       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.50.242"}
	I0328 21:16:19.189678       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0328 21:16:19.191142       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0328 21:16:19.231794       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0328 21:16:19.231916       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0328 21:16:19.238137       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0328 21:16:19.238179       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0328 21:16:19.265917       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0328 21:16:19.266427       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0328 21:16:19.300292       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0328 21:16:19.300340       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0328 21:16:20.239164       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0328 21:16:20.301014       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0328 21:16:20.351227       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0328 21:16:20.958713       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0328 21:16:21.237949       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.65.155"}
	I0328 21:16:24.132328       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0328 21:16:26.027343       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0328 21:16:27.057543       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0328 21:18:43.125644       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.62.163"}
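
The earlier connection-refused errors against 10.100.176.217:443 mean the v1beta1.metrics.k8s.io APIService briefly had no healthy backend while metrics-server came up; the later "Nothing (removed from the queue)" line shows the aggregator settled. One hedged way to check that condition directly:

	kubectl get apiservice v1beta1.metrics.k8s.io \
	  -o jsonpath='{range .status.conditions[?(@.type=="Available")]}{.status}{" "}{.message}{"\n"}{end}'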
	
	
	==> kube-controller-manager [855ea02ed22c52ed3dc18e33c7264cb29f781ae69177261c9fc92758cb0e1e58] <==
	W0328 21:17:45.757453       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0328 21:17:45.757485       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0328 21:18:16.025047       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0328 21:18:16.025088       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0328 21:18:24.945837       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0328 21:18:24.945955       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0328 21:18:30.225290       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0328 21:18:30.225325       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0328 21:18:30.577493       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0328 21:18:30.577543       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0328 21:18:42.813637       1 event.go:376] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0328 21:18:42.846764       1 event.go:376] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-zgtmd"
	I0328 21:18:42.859531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="46.620735ms"
	I0328 21:18:42.878416       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="18.757154ms"
	I0328 21:18:42.923678       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="45.125516ms"
	I0328 21:18:42.923867       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="50.937µs"
	I0328 21:18:46.780515       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="65.337µs"
	I0328 21:18:47.778391       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="51.922µs"
	I0328 21:18:48.794218       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="64.943µs"
	I0328 21:19:00.717544       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0328 21:19:00.721241       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="6.966µs"
	I0328 21:19:00.726071       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0328 21:19:02.827508       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="66.109µs"
	W0328 21:19:04.475206       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0328 21:19:04.475240       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
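
The recurring PartialObjectMetadata list failures come from the controller-manager's metadata informer still watching group/versions whose CRDs were deleted mid-test (the snapshot.storage.k8s.io and gadget.kinvolk.io removals are visible in the kube-apiserver log above). A quick sketch to confirm the CRDs really are gone:

	kubectl get crd -o name | grep -E 'snapshot\.storage\.k8s\.io|gadget\.kinvolk\.io' \
	  || echo "matching CRDs removed"
	kubectl api-resources --api-group=snapshot.storage.k8s.io   # expect an empty list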
	
	
	==> kube-proxy [60c61cb9647ceb3f2afc221cc1b8c5e9efd0c6ff69375cd3d9cc433c844004df] <==
	I0328 21:13:36.719095       1 server_others.go:72] "Using iptables proxy"
	I0328 21:13:36.911392       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0328 21:13:37.285148       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0328 21:13:37.285271       1 server_others.go:168] "Using iptables Proxier"
	I0328 21:13:37.287252       1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0328 21:13:37.287344       1 server_others.go:529] "Defaulting to no-op detect-local"
	I0328 21:13:37.287408       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 21:13:37.287681       1 server.go:865] "Version info" version="v1.29.3"
	I0328 21:13:37.287897       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 21:13:37.289050       1 config.go:188] "Starting service config controller"
	I0328 21:13:37.289124       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 21:13:37.289169       1 config.go:97] "Starting endpoint slice config controller"
	I0328 21:13:37.289197       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 21:13:37.289772       1 config.go:315] "Starting node config controller"
	I0328 21:13:37.290700       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 21:13:37.389989       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 21:13:37.402442       1 shared_informer.go:318] Caches are synced for node config
	I0328 21:13:37.402472       1 shared_informer.go:318] Caches are synced for service config
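
Note the route_localnet=1 line: kube-proxy enables it so NodePorts answer on 127.0.0.1, which is exactly the path the failed curl against http://127.0.0.1/ in this test relies on. Verifying the sysctl from the host is a one-liner sketch (profile name taken from this run):

	minikube -p addons-564371 ssh "sysctl net.ipv4.conf.all.route_localnet"   # expect: ... = 1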
	
	
	==> kube-scheduler [79ece69608e3978c937732d6c80e8a983e055f882a4ba16d182b60dd3c84283b] <==
	W0328 21:13:15.742964       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0328 21:13:15.742995       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0328 21:13:15.743056       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0328 21:13:15.743067       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0328 21:13:15.743140       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0328 21:13:15.743156       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 21:13:15.743209       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0328 21:13:15.743223       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0328 21:13:15.743277       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 21:13:15.743325       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0328 21:13:15.743363       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0328 21:13:15.743379       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0328 21:13:15.743307       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0328 21:13:15.743434       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 21:13:15.743450       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 21:13:15.743483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0328 21:13:15.743549       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0328 21:13:15.743565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0328 21:13:15.743638       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0328 21:13:15.743672       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0328 21:13:15.743783       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0328 21:13:15.743802       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0328 21:13:16.665313       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0328 21:13:16.665350       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 21:13:19.534964       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
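
The burst of "forbidden" errors at 21:13:15 is the usual startup race, the scheduler's informers listing resources before its RBAC bindings are served, and the closing "Caches are synced" line shows it recovered. A hedged after-the-fact check (impersonation needs an admin kubeconfig):

	kubectl auth can-i list pods --as=system:kube-scheduler   # expect: yes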
	
	
	==> kubelet <==
	Mar 28 21:18:59 addons-564371 kubelet[1504]: I0328 21:18:59.174262    1504 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ckjt9\" (UniqueName: \"kubernetes.io/projected/6c84243b-4db7-48e8-a60f-33e85a15acf6-kube-api-access-ckjt9\") on node \"addons-564371\" DevicePath \"\""
	Mar 28 21:18:59 addons-564371 kubelet[1504]: I0328 21:18:59.795710    1504 scope.go:117] "RemoveContainer" containerID="846a997de17bc651d730aee5e6c65438a2871f23d1615743815789a242c7a3ae"
	Mar 28 21:19:00 addons-564371 kubelet[1504]: I0328 21:19:00.713108    1504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c84243b-4db7-48e8-a60f-33e85a15acf6" path="/var/lib/kubelet/pods/6c84243b-4db7-48e8-a60f-33e85a15acf6/volumes"
	Mar 28 21:19:01 addons-564371 kubelet[1504]: I0328 21:19:01.711569    1504 scope.go:117] "RemoveContainer" containerID="8535fd78b8dddee658f05a1bc636111f84ef61be9065a6f6c88a98296bb9c6e3"
	Mar 28 21:19:02 addons-564371 kubelet[1504]: I0328 21:19:02.713230    1504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="071995bc-78e0-4371-8355-745d24f15fd9" path="/var/lib/kubelet/pods/071995bc-78e0-4371-8355-745d24f15fd9/volumes"
	Mar 28 21:19:02 addons-564371 kubelet[1504]: I0328 21:19:02.713617    1504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3f4af97-42f5-4fb1-9681-7a7e7c37b355" path="/var/lib/kubelet/pods/b3f4af97-42f5-4fb1-9681-7a7e7c37b355/volumes"
	Mar 28 21:19:02 addons-564371 kubelet[1504]: I0328 21:19:02.807529    1504 scope.go:117] "RemoveContainer" containerID="8535fd78b8dddee658f05a1bc636111f84ef61be9065a6f6c88a98296bb9c6e3"
	Mar 28 21:19:02 addons-564371 kubelet[1504]: I0328 21:19:02.807808    1504 scope.go:117] "RemoveContainer" containerID="e3a0262115158b87f1431b53f7f4265e0271038ef973cfeeb37804ad4d91d8d5"
	Mar 28 21:19:02 addons-564371 kubelet[1504]: E0328 21:19:02.808070    1504 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-zgtmd_default(e72182cd-fca7-4273-bac4-90c4acfa939d)\"" pod="default/hello-world-app-5d77478584-zgtmd" podUID="e72182cd-fca7-4273-bac4-90c4acfa939d"
	Mar 28 21:19:04 addons-564371 kubelet[1504]: I0328 21:19:04.126309    1504 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7sjt\" (UniqueName: \"kubernetes.io/projected/bad1cc95-8b17-4e55-8714-73c51552d256-kube-api-access-x7sjt\") pod \"bad1cc95-8b17-4e55-8714-73c51552d256\" (UID: \"bad1cc95-8b17-4e55-8714-73c51552d256\") "
	Mar 28 21:19:04 addons-564371 kubelet[1504]: I0328 21:19:04.126373    1504 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bad1cc95-8b17-4e55-8714-73c51552d256-webhook-cert\") pod \"bad1cc95-8b17-4e55-8714-73c51552d256\" (UID: \"bad1cc95-8b17-4e55-8714-73c51552d256\") "
	Mar 28 21:19:04 addons-564371 kubelet[1504]: I0328 21:19:04.132590    1504 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bad1cc95-8b17-4e55-8714-73c51552d256-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "bad1cc95-8b17-4e55-8714-73c51552d256" (UID: "bad1cc95-8b17-4e55-8714-73c51552d256"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 28 21:19:04 addons-564371 kubelet[1504]: I0328 21:19:04.132952    1504 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bad1cc95-8b17-4e55-8714-73c51552d256-kube-api-access-x7sjt" (OuterVolumeSpecName: "kube-api-access-x7sjt") pod "bad1cc95-8b17-4e55-8714-73c51552d256" (UID: "bad1cc95-8b17-4e55-8714-73c51552d256"). InnerVolumeSpecName "kube-api-access-x7sjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 28 21:19:04 addons-564371 kubelet[1504]: I0328 21:19:04.226960    1504 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-x7sjt\" (UniqueName: \"kubernetes.io/projected/bad1cc95-8b17-4e55-8714-73c51552d256-kube-api-access-x7sjt\") on node \"addons-564371\" DevicePath \"\""
	Mar 28 21:19:04 addons-564371 kubelet[1504]: I0328 21:19:04.227006    1504 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bad1cc95-8b17-4e55-8714-73c51552d256-webhook-cert\") on node \"addons-564371\" DevicePath \"\""
	Mar 28 21:19:04 addons-564371 kubelet[1504]: I0328 21:19:04.712515    1504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bad1cc95-8b17-4e55-8714-73c51552d256" path="/var/lib/kubelet/pods/bad1cc95-8b17-4e55-8714-73c51552d256/volumes"
	Mar 28 21:19:04 addons-564371 kubelet[1504]: I0328 21:19:04.813943    1504 scope.go:117] "RemoveContainer" containerID="79ddb2e00f1ac1966cc9550bfe9a31a08caa6651670057cfd5f74381cc8d33c0"
	Mar 28 21:19:04 addons-564371 kubelet[1504]: I0328 21:19:04.832900    1504 scope.go:117] "RemoveContainer" containerID="79ddb2e00f1ac1966cc9550bfe9a31a08caa6651670057cfd5f74381cc8d33c0"
	Mar 28 21:19:04 addons-564371 kubelet[1504]: E0328 21:19:04.833305    1504 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79ddb2e00f1ac1966cc9550bfe9a31a08caa6651670057cfd5f74381cc8d33c0\": container with ID starting with 79ddb2e00f1ac1966cc9550bfe9a31a08caa6651670057cfd5f74381cc8d33c0 not found: ID does not exist" containerID="79ddb2e00f1ac1966cc9550bfe9a31a08caa6651670057cfd5f74381cc8d33c0"
	Mar 28 21:19:04 addons-564371 kubelet[1504]: I0328 21:19:04.833356    1504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79ddb2e00f1ac1966cc9550bfe9a31a08caa6651670057cfd5f74381cc8d33c0"} err="failed to get container status \"79ddb2e00f1ac1966cc9550bfe9a31a08caa6651670057cfd5f74381cc8d33c0\": rpc error: code = NotFound desc = could not find container \"79ddb2e00f1ac1966cc9550bfe9a31a08caa6651670057cfd5f74381cc8d33c0\": container with ID starting with 79ddb2e00f1ac1966cc9550bfe9a31a08caa6651670057cfd5f74381cc8d33c0 not found: ID does not exist"
	Mar 28 21:19:04 addons-564371 kubelet[1504]: E0328 21:19:04.906489    1504 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/56fa08ac696d00e4be799446df62eee8c11f18d2b4de3d18b731d4df93bdf28d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/56fa08ac696d00e4be799446df62eee8c11f18d2b4de3d18b731d4df93bdf28d/diff: no such file or directory, extraDiskErr: <nil>
	Mar 28 21:19:05 addons-564371 kubelet[1504]: E0328 21:19:05.306325    1504 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ec0acac9a196ff6ae5433eacd06cbe99c43b6f98fd8f8fbf48550f40de7f55f3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ec0acac9a196ff6ae5433eacd06cbe99c43b6f98fd8f8fbf48550f40de7f55f3/diff: no such file or directory, extraDiskErr: <nil>
	Mar 28 21:19:05 addons-564371 kubelet[1504]: E0328 21:19:05.391122    1504 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/29d40e37f3d2c9421b6bbfb1bd9a1c985b4f2e68a0aa10943e81a2b1d9146cbf/diff" to get inode usage: stat /var/lib/containers/storage/overlay/29d40e37f3d2c9421b6bbfb1bd9a1c985b4f2e68a0aa10943e81a2b1d9146cbf/diff: no such file or directory, extraDiskErr: <nil>
	Mar 28 21:19:06 addons-564371 kubelet[1504]: E0328 21:19:06.258156    1504 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1266c2ef3b42efba8b18ff952db591ae786048794c21f4abd7a31b2436e966c6/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1266c2ef3b42efba8b18ff952db591ae786048794c21f4abd7a31b2436e966c6/diff: no such file or directory, extraDiskErr: <nil>
	Mar 28 21:19:06 addons-564371 kubelet[1504]: E0328 21:19:06.406553    1504 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0903e16975c59e0e5cd880d2634e58e7186ad802583953c0f25ce32799b06770/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0903e16975c59e0e5cd880d2634e58e7186ad802583953c0f25ce32799b06770/diff: no such file or directory, extraDiskErr: <nil>
	
	
	==> storage-provisioner [c8e49b9bcacf776929e9143a5069e8384580d4ef6748eaa95febd01cf27d58e4] <==
	I0328 21:14:05.488896       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0328 21:14:05.518119       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0328 21:14:05.518718       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0328 21:14:05.529903       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0328 21:14:05.530157       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e26d570b-6692-4e08-8538-6159644e262a", APIVersion:"v1", ResourceVersion:"925", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-564371_56cc5e4d-0dce-4f96-91bb-b5b2b41aa049 became leader
	I0328 21:14:05.531484       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-564371_56cc5e4d-0dce-4f96-91bb-b5b2b41aa049!
	I0328 21:14:05.633365       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-564371_56cc5e4d-0dce-4f96-91bb-b5b2b41aa049!
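
The provisioner won its leader election through the kube-system/k8s.io-minikube-hostpath Endpoints object, as the Event line records. The leader identity lives in an annotation on that object (key per the classic client-go endpoints lock, an assumption worth re-checking on newer provisioner builds):

	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}{"\n"}'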
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-564371 -n addons-564371
helpers_test.go:261: (dbg) Run:  kubectl --context addons-564371 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (169.71s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (383.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-633693 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-633693 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 102 (6m20.383945433s)
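
The start command returns exit status 102 after 6m20s, a coded failure rather than a crash; the captured stdout below ends after addon enablement with no success line. When reproducing, the profile's post-mortem can be bundled with minikube's own log collector (a sketch; assumes the old-k8s-version-633693 profile still exists):

	out/minikube-linux-arm64 -p old-k8s-version-633693 logs --file=old-k8s-version.log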

                                                
                                                
-- stdout --
	* [old-k8s-version-633693] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17877-1145955/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17877-1145955/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-633693" primary control-plane node in "old-k8s-version-633693" cluster
	* Pulling base image v0.0.43-1711559786-18485 ...
	* Restarting existing docker container for "old-k8s-version-633693" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-633693 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 22:06:20.457738 1338826 out.go:291] Setting OutFile to fd 1 ...
	I0328 22:06:20.457942 1338826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 22:06:20.457968 1338826 out.go:304] Setting ErrFile to fd 2...
	I0328 22:06:20.457985 1338826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 22:06:20.458264 1338826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17877-1145955/.minikube/bin
	I0328 22:06:20.458672 1338826 out.go:298] Setting JSON to false
	I0328 22:06:20.459640 1338826 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":20931,"bootTime":1711642650,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0328 22:06:20.459741 1338826 start.go:139] virtualization:  
	I0328 22:06:20.463439 1338826 out.go:177] * [old-k8s-version-633693] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 22:06:20.465745 1338826 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 22:06:20.465819 1338826 notify.go:220] Checking for updates...
	I0328 22:06:20.468803 1338826 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 22:06:20.471354 1338826 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17877-1145955/kubeconfig
	I0328 22:06:20.473216 1338826 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17877-1145955/.minikube
	I0328 22:06:20.475099 1338826 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0328 22:06:20.476838 1338826 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 22:06:20.479428 1338826 config.go:182] Loaded profile config "old-k8s-version-633693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0328 22:06:20.483742 1338826 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0328 22:06:20.485706 1338826 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 22:06:20.508597 1338826 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 22:06:20.508708 1338826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 22:06:20.616696 1338826 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:59 SystemTime:2024-03-28 22:06:20.604272526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 22:06:20.616798 1338826 docker.go:295] overlay module found
	I0328 22:06:20.619546 1338826 out.go:177] * Using the docker driver based on existing profile
	I0328 22:06:20.621460 1338826 start.go:297] selected driver: docker
	I0328 22:06:20.621479 1338826 start.go:901] validating driver "docker" against &{Name:old-k8s-version-633693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-633693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 22:06:20.621586 1338826 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 22:06:20.622187 1338826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 22:06:20.696601 1338826 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:59 SystemTime:2024-03-28 22:06:20.679942522 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 22:06:20.696929 1338826 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 22:06:20.696986 1338826 cni.go:84] Creating CNI manager for ""
	I0328 22:06:20.696995 1338826 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0328 22:06:20.697035 1338826 start.go:340] cluster config:
	{Name:old-k8s-version-633693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-633693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 22:06:20.700008 1338826 out.go:177] * Starting "old-k8s-version-633693" primary control-plane node in "old-k8s-version-633693" cluster
	I0328 22:06:20.702382 1338826 cache.go:121] Beginning downloading kic base image for docker with crio
	I0328 22:06:20.704472 1338826 out.go:177] * Pulling base image v0.0.43-1711559786-18485 ...
	I0328 22:06:20.706697 1338826 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0328 22:06:20.706747 1338826 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0328 22:06:20.706769 1338826 cache.go:56] Caching tarball of preloaded images
	I0328 22:06:20.706850 1338826 preload.go:173] Found /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0328 22:06:20.706858 1338826 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0328 22:06:20.706964 1338826 profile.go:143] Saving config to /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/config.json ...
	I0328 22:06:20.707195 1338826 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon
	I0328 22:06:20.725508 1338826 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon, skipping pull
	I0328 22:06:20.725531 1338826 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 exists in daemon, skipping load
	I0328 22:06:20.725549 1338826 cache.go:194] Successfully downloaded all kic artifacts
	I0328 22:06:20.725578 1338826 start.go:360] acquireMachinesLock for old-k8s-version-633693: {Name:mk3fbfb43af77d465332b66bdb01d7d737487536 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 22:06:20.725638 1338826 start.go:364] duration metric: took 39.623µs to acquireMachinesLock for "old-k8s-version-633693"
	I0328 22:06:20.725656 1338826 start.go:96] Skipping create...Using existing machine configuration
	I0328 22:06:20.725662 1338826 fix.go:54] fixHost starting: 
	I0328 22:06:20.725929 1338826 cli_runner.go:164] Run: docker container inspect old-k8s-version-633693 --format={{.State.Status}}
	I0328 22:06:20.742622 1338826 fix.go:112] recreateIfNeeded on old-k8s-version-633693: state=Stopped err=<nil>
	W0328 22:06:20.742658 1338826 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 22:06:20.745084 1338826 out.go:177] * Restarting existing docker container for "old-k8s-version-633693" ...
	I0328 22:06:20.746945 1338826 cli_runner.go:164] Run: docker start old-k8s-version-633693
	I0328 22:06:21.025411 1338826 cli_runner.go:164] Run: docker container inspect old-k8s-version-633693 --format={{.State.Status}}
	I0328 22:06:21.047957 1338826 kic.go:430] container "old-k8s-version-633693" state is running.
	I0328 22:06:21.048372 1338826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-633693
	I0328 22:06:21.075027 1338826 profile.go:143] Saving config to /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/config.json ...
	I0328 22:06:21.075277 1338826 machine.go:94] provisionDockerMachine start ...
	I0328 22:06:21.075338 1338826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-633693
	I0328 22:06:21.102941 1338826 main.go:141] libmachine: Using SSH client type: native
	I0328 22:06:21.103220 1338826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 34554 <nil> <nil>}
	I0328 22:06:21.103231 1338826 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 22:06:21.103934 1338826 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0328 22:06:24.255358 1338826 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-633693
	
	I0328 22:06:24.255385 1338826 ubuntu.go:169] provisioning hostname "old-k8s-version-633693"
	I0328 22:06:24.255475 1338826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-633693
	I0328 22:06:24.272004 1338826 main.go:141] libmachine: Using SSH client type: native
	I0328 22:06:24.272399 1338826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 34554 <nil> <nil>}
	I0328 22:06:24.272419 1338826 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-633693 && echo "old-k8s-version-633693" | sudo tee /etc/hostname
	I0328 22:06:24.432155 1338826 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-633693
	
	I0328 22:06:24.432334 1338826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-633693
	I0328 22:06:24.457681 1338826 main.go:141] libmachine: Using SSH client type: native
	I0328 22:06:24.457918 1338826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 34554 <nil> <nil>}
	I0328 22:06:24.457936 1338826 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-633693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-633693/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-633693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 22:06:24.600215 1338826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 22:06:24.600295 1338826 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17877-1145955/.minikube CaCertPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17877-1145955/.minikube}
	I0328 22:06:24.600341 1338826 ubuntu.go:177] setting up certificates
	I0328 22:06:24.600381 1338826 provision.go:84] configureAuth start
	I0328 22:06:24.600491 1338826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-633693
	I0328 22:06:24.614561 1338826 provision.go:143] copyHostCerts
	I0328 22:06:24.614631 1338826 exec_runner.go:144] found /home/jenkins/minikube-integration/17877-1145955/.minikube/cert.pem, removing ...
	I0328 22:06:24.614652 1338826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17877-1145955/.minikube/cert.pem
	I0328 22:06:24.614743 1338826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17877-1145955/.minikube/cert.pem (1123 bytes)
	I0328 22:06:24.614841 1338826 exec_runner.go:144] found /home/jenkins/minikube-integration/17877-1145955/.minikube/key.pem, removing ...
	I0328 22:06:24.614850 1338826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17877-1145955/.minikube/key.pem
	I0328 22:06:24.614878 1338826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17877-1145955/.minikube/key.pem (1679 bytes)
	I0328 22:06:24.614941 1338826 exec_runner.go:144] found /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.pem, removing ...
	I0328 22:06:24.614956 1338826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.pem
	I0328 22:06:24.614982 1338826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.pem (1082 bytes)
	I0328 22:06:24.615033 1338826 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-633693 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-633693]
	I0328 22:06:25.066127 1338826 provision.go:177] copyRemoteCerts
	I0328 22:06:25.066207 1338826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 22:06:25.066254 1338826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-633693
	I0328 22:06:25.084182 1338826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34554 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/old-k8s-version-633693/id_rsa Username:docker}
	I0328 22:06:25.195157 1338826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0328 22:06:25.228220 1338826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0328 22:06:25.258013 1338826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 22:06:25.309022 1338826 provision.go:87] duration metric: took 708.608266ms to configureAuth
	I0328 22:06:25.309052 1338826 ubuntu.go:193] setting minikube options for container-runtime
	I0328 22:06:25.309258 1338826 config.go:182] Loaded profile config "old-k8s-version-633693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0328 22:06:25.309362 1338826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-633693
	I0328 22:06:25.332398 1338826 main.go:141] libmachine: Using SSH client type: native
	I0328 22:06:25.332701 1338826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 34554 <nil> <nil>}
	I0328 22:06:25.332722 1338826 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 22:06:25.792200 1338826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 22:06:25.792234 1338826 machine.go:97] duration metric: took 4.716947343s to provisionDockerMachine
	I0328 22:06:25.792246 1338826 start.go:293] postStartSetup for "old-k8s-version-633693" (driver="docker")
	I0328 22:06:25.792258 1338826 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 22:06:25.792329 1338826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 22:06:25.792375 1338826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-633693
	I0328 22:06:25.813520 1338826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34554 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/old-k8s-version-633693/id_rsa Username:docker}
	I0328 22:06:25.918481 1338826 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 22:06:25.922423 1338826 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0328 22:06:25.922457 1338826 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0328 22:06:25.922467 1338826 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0328 22:06:25.922474 1338826 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0328 22:06:25.922485 1338826 filesync.go:126] Scanning /home/jenkins/minikube-integration/17877-1145955/.minikube/addons for local assets ...
	I0328 22:06:25.922541 1338826 filesync.go:126] Scanning /home/jenkins/minikube-integration/17877-1145955/.minikube/files for local assets ...
	I0328 22:06:25.922625 1338826 filesync.go:149] local asset: /home/jenkins/minikube-integration/17877-1145955/.minikube/files/etc/ssl/certs/11513632.pem -> 11513632.pem in /etc/ssl/certs
	I0328 22:06:25.922732 1338826 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 22:06:25.932532 1338826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/files/etc/ssl/certs/11513632.pem --> /etc/ssl/certs/11513632.pem (1708 bytes)
	I0328 22:06:25.959057 1338826 start.go:296] duration metric: took 166.794749ms for postStartSetup
	I0328 22:06:25.959225 1338826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 22:06:25.959289 1338826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-633693
	I0328 22:06:25.974030 1338826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34554 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/old-k8s-version-633693/id_rsa Username:docker}
	I0328 22:06:26.073166 1338826 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0328 22:06:26.078612 1338826 fix.go:56] duration metric: took 5.352940797s for fixHost
	I0328 22:06:26.078640 1338826 start.go:83] releasing machines lock for "old-k8s-version-633693", held for 5.352993925s
	I0328 22:06:26.078720 1338826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-633693
	I0328 22:06:26.103482 1338826 ssh_runner.go:195] Run: cat /version.json
	I0328 22:06:26.103540 1338826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-633693
	I0328 22:06:26.103780 1338826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 22:06:26.103830 1338826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-633693
	I0328 22:06:26.133765 1338826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34554 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/old-k8s-version-633693/id_rsa Username:docker}
	I0328 22:06:26.137934 1338826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34554 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/old-k8s-version-633693/id_rsa Username:docker}
	I0328 22:06:26.236104 1338826 ssh_runner.go:195] Run: systemctl --version
	I0328 22:06:26.357280 1338826 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 22:06:26.512404 1338826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0328 22:06:26.517443 1338826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 22:06:26.527716 1338826 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0328 22:06:26.527806 1338826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 22:06:26.537233 1338826 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0328 22:06:26.537261 1338826 start.go:494] detecting cgroup driver to use...
	I0328 22:06:26.537293 1338826 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0328 22:06:26.537356 1338826 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 22:06:26.552157 1338826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 22:06:26.565388 1338826 docker.go:217] disabling cri-docker service (if available) ...
	I0328 22:06:26.565456 1338826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 22:06:26.579489 1338826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 22:06:26.592706 1338826 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 22:06:26.707888 1338826 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 22:06:26.819405 1338826 docker.go:233] disabling docker service ...
	I0328 22:06:26.819531 1338826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 22:06:26.834077 1338826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 22:06:26.846469 1338826 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 22:06:26.955411 1338826 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 22:06:27.107077 1338826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 22:06:27.124814 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 22:06:27.153638 1338826 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0328 22:06:27.153767 1338826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 22:06:27.164067 1338826 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 22:06:27.164212 1338826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 22:06:27.174649 1338826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 22:06:27.185299 1338826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
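	The sed invocations above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and reset conmon_cgroup in /etc/crio/crio.conf.d/02-crio.conf. A rough Go equivalent of those edits, assuming the one-key-per-line layout the sed patterns imply (minikube actually runs sed over SSH, not Go code like this):

```go
// Rough Go rendering of the sed edits above; values and path are from the log.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Pin the pause image.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	// Drop any existing conmon_cgroup line ...
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAll(conf, nil)
	// ... then set the cgroup manager and re-add conmon_cgroup right after it.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
}
```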
	I0328 22:06:27.195699 1338826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 22:06:27.205515 1338826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 22:06:27.214861 1338826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 22:06:27.223920 1338826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 22:06:27.379449 1338826 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 22:06:28.463286 1338826 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.08374705s)
	I0328 22:06:28.463376 1338826 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 22:06:28.463468 1338826 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 22:06:28.468081 1338826 start.go:562] Will wait 60s for crictl version
	I0328 22:06:28.468237 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:06:28.473254 1338826 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 22:06:28.527549 1338826 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0328 22:06:28.527703 1338826 ssh_runner.go:195] Run: crio --version
	I0328 22:06:28.570021 1338826 ssh_runner.go:195] Run: crio --version
	I0328 22:06:28.661897 1338826 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0328 22:06:28.664443 1338826 cli_runner.go:164] Run: docker network inspect old-k8s-version-633693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0328 22:06:28.695642 1338826 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0328 22:06:28.699406 1338826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 22:06:28.709557 1338826 kubeadm.go:877] updating cluster {Name:old-k8s-version-633693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-633693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 22:06:28.709689 1338826 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0328 22:06:28.709751 1338826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 22:06:28.769089 1338826 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 22:06:28.769114 1338826 crio.go:433] Images already preloaded, skipping extraction
	I0328 22:06:28.769199 1338826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 22:06:28.814249 1338826 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 22:06:28.814274 1338826 cache_images.go:84] Images are preloaded, skipping loading
	I0328 22:06:28.814283 1338826 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.20.0 crio true true} ...
	I0328 22:06:28.814405 1338826 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-633693 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-633693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 22:06:28.814494 1338826 ssh_runner.go:195] Run: crio config
	I0328 22:06:28.934128 1338826 cni.go:84] Creating CNI manager for ""
	I0328 22:06:28.934152 1338826 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0328 22:06:28.934164 1338826 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 22:06:28.934205 1338826 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-633693 NodeName:old-k8s-version-633693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0328 22:06:28.934380 1338826 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-633693"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 22:06:28.934468 1338826 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0328 22:06:28.943478 1338826 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 22:06:28.943568 1338826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 22:06:28.951834 1338826 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I0328 22:06:28.971640 1338826 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 22:06:28.990717 1338826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0328 22:06:29.013678 1338826 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0328 22:06:29.017659 1338826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
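	The bash one-liner above is an idempotent /etc/hosts update: strip any stale control-plane.minikube.internal entry, append the current one, and copy the temp file back into place. A hypothetical Go rendering of the same idea (it needs root, like the sudo cp in the original):

```go
// Hypothetical Go rendering of the idempotent /etc/hosts rewrite above.
package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const ip = "192.168.76.2"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) { // same filter as the grep -v
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	// Needs root, like the sudo cp in the original one-liner.
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
```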
	I0328 22:06:29.029010 1338826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 22:06:29.162983 1338826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 22:06:29.181472 1338826 certs.go:68] Setting up /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693 for IP: 192.168.76.2
	I0328 22:06:29.181495 1338826 certs.go:194] generating shared ca certs ...
	I0328 22:06:29.181511 1338826 certs.go:226] acquiring lock for ca certs: {Name:mk1e4b3d6020f96643d0b806687ddcafb6824b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 22:06:29.181723 1338826 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.key
	I0328 22:06:29.181800 1338826 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/proxy-client-ca.key
	I0328 22:06:29.181815 1338826 certs.go:256] generating profile certs ...
	I0328 22:06:29.181923 1338826 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/client.key
	I0328 22:06:29.182007 1338826 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/apiserver.key.e34860cc
	I0328 22:06:29.182070 1338826 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/proxy-client.key
	I0328 22:06:29.182202 1338826 certs.go:484] found cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/1151363.pem (1338 bytes)
	W0328 22:06:29.182252 1338826 certs.go:480] ignoring /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/1151363_empty.pem, impossibly tiny 0 bytes
	I0328 22:06:29.182267 1338826 certs.go:484] found cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca-key.pem (1679 bytes)
	I0328 22:06:29.182297 1338826 certs.go:484] found cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca.pem (1082 bytes)
	I0328 22:06:29.182355 1338826 certs.go:484] found cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/cert.pem (1123 bytes)
	I0328 22:06:29.182407 1338826 certs.go:484] found cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/key.pem (1679 bytes)
	I0328 22:06:29.182474 1338826 certs.go:484] found cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/files/etc/ssl/certs/11513632.pem (1708 bytes)
	I0328 22:06:29.183123 1338826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 22:06:29.289591 1338826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0328 22:06:29.359403 1338826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 22:06:29.385940 1338826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 22:06:29.411504 1338826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0328 22:06:29.437705 1338826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0328 22:06:29.463799 1338826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 22:06:29.489953 1338826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0328 22:06:29.516028 1338826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 22:06:29.542283 1338826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/1151363.pem --> /usr/share/ca-certificates/1151363.pem (1338 bytes)
	I0328 22:06:29.568454 1338826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/files/etc/ssl/certs/11513632.pem --> /usr/share/ca-certificates/11513632.pem (1708 bytes)
	I0328 22:06:29.593883 1338826 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 22:06:29.613283 1338826 ssh_runner.go:195] Run: openssl version
	I0328 22:06:29.618984 1338826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 22:06:29.629274 1338826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 22:06:29.633033 1338826 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 28 21:12 /usr/share/ca-certificates/minikubeCA.pem
	I0328 22:06:29.633128 1338826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 22:06:29.640130 1338826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 22:06:29.649644 1338826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1151363.pem && ln -fs /usr/share/ca-certificates/1151363.pem /etc/ssl/certs/1151363.pem"
	I0328 22:06:29.659725 1338826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1151363.pem
	I0328 22:06:29.663581 1338826 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 28 21:20 /usr/share/ca-certificates/1151363.pem
	I0328 22:06:29.663692 1338826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1151363.pem
	I0328 22:06:29.671035 1338826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1151363.pem /etc/ssl/certs/51391683.0"
	I0328 22:06:29.680921 1338826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11513632.pem && ln -fs /usr/share/ca-certificates/11513632.pem /etc/ssl/certs/11513632.pem"
	I0328 22:06:29.690830 1338826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11513632.pem
	I0328 22:06:29.694795 1338826 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 28 21:20 /usr/share/ca-certificates/11513632.pem
	I0328 22:06:29.694937 1338826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11513632.pem
	I0328 22:06:29.702366 1338826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11513632.pem /etc/ssl/certs/3ec20f2e.0"
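	The openssl/ln pairs above implement OpenSSL's hashed-directory lookup convention: each CA installed under /etc/ssl/certs gets a symlink named <subject-hash>.0 (b5213941.0, 51391683.0, and 3ec20f2e.0 in this run) so the verifier can locate it by subject hash. A minimal sketch of one hash-and-link step; the log performs the equivalent over SSH with `openssl x509 -hash` plus `ln -fs`:

```go
// Minimal sketch of one hash-and-symlink step for the OpenSSL cert directory.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // b5213941 for minikubeCA in this run
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println(link, "->", pemPath)
}
```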
	I0328 22:06:29.712274 1338826 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 22:06:29.716389 1338826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 22:06:29.723623 1338826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 22:06:29.731033 1338826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 22:06:29.738151 1338826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 22:06:29.745420 1338826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 22:06:29.752519 1338826 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0328 22:06:29.759632 1338826 kubeadm.go:391] StartCluster: {Name:old-k8s-version-633693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-633693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 22:06:29.759780 1338826 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 22:06:29.759887 1338826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 22:06:29.834024 1338826 cri.go:89] found id: ""
	I0328 22:06:29.834146 1338826 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 22:06:29.844585 1338826 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 22:06:29.844656 1338826 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 22:06:29.844674 1338826 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 22:06:29.844768 1338826 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 22:06:29.854686 1338826 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 22:06:29.855186 1338826 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-633693" does not appear in /home/jenkins/minikube-integration/17877-1145955/kubeconfig
	I0328 22:06:29.855367 1338826 kubeconfig.go:62] /home/jenkins/minikube-integration/17877-1145955/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-633693" cluster setting kubeconfig missing "old-k8s-version-633693" context setting]
	I0328 22:06:29.855776 1338826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17877-1145955/kubeconfig: {Name:mk01de9100d65131f49674a0d1051891ca674cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 22:06:29.857241 1338826 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 22:06:29.866745 1338826 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.76.2
	I0328 22:06:29.866817 1338826 kubeadm.go:591] duration metric: took 22.125235ms to restartPrimaryControlPlane
	I0328 22:06:29.866841 1338826 kubeadm.go:393] duration metric: took 107.224232ms to StartCluster
	I0328 22:06:29.866899 1338826 settings.go:142] acquiring lock: {Name:mka22e5d6cd66b2677ac3cce373c1a6e13c189c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 22:06:29.866973 1338826 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17877-1145955/kubeconfig
	I0328 22:06:29.867680 1338826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17877-1145955/kubeconfig: {Name:mk01de9100d65131f49674a0d1051891ca674cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 22:06:29.867946 1338826 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 22:06:29.870622 1338826 out.go:177] * Verifying Kubernetes components...
	I0328 22:06:29.868305 1338826 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 22:06:29.868395 1338826 config.go:182] Loaded profile config "old-k8s-version-633693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0328 22:06:29.872425 1338826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 22:06:29.870849 1338826 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-633693"
	I0328 22:06:29.872626 1338826 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-633693"
	W0328 22:06:29.872637 1338826 addons.go:243] addon storage-provisioner should already be in state true
	I0328 22:06:29.872688 1338826 host.go:66] Checking if "old-k8s-version-633693" exists ...
	I0328 22:06:29.873103 1338826 cli_runner.go:164] Run: docker container inspect old-k8s-version-633693 --format={{.State.Status}}
	I0328 22:06:29.870859 1338826 addons.go:69] Setting dashboard=true in profile "old-k8s-version-633693"
	I0328 22:06:29.873376 1338826 addons.go:234] Setting addon dashboard=true in "old-k8s-version-633693"
	W0328 22:06:29.873399 1338826 addons.go:243] addon dashboard should already be in state true
	I0328 22:06:29.873447 1338826 host.go:66] Checking if "old-k8s-version-633693" exists ...
	I0328 22:06:29.873880 1338826 cli_runner.go:164] Run: docker container inspect old-k8s-version-633693 --format={{.State.Status}}
	I0328 22:06:29.870865 1338826 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-633693"
	I0328 22:06:29.874339 1338826 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-633693"
	I0328 22:06:29.874625 1338826 cli_runner.go:164] Run: docker container inspect old-k8s-version-633693 --format={{.State.Status}}
	I0328 22:06:29.870888 1338826 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-633693"
	I0328 22:06:29.875184 1338826 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-633693"
	W0328 22:06:29.875208 1338826 addons.go:243] addon metrics-server should already be in state true
	I0328 22:06:29.875258 1338826 host.go:66] Checking if "old-k8s-version-633693" exists ...
	I0328 22:06:29.875693 1338826 cli_runner.go:164] Run: docker container inspect old-k8s-version-633693 --format={{.State.Status}}
	I0328 22:06:29.925218 1338826 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0328 22:06:29.927367 1338826 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0328 22:06:29.932239 1338826 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0328 22:06:29.932269 1338826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0328 22:06:29.932334 1338826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-633693
	I0328 22:06:29.928352 1338826 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-633693"
	W0328 22:06:29.932551 1338826 addons.go:243] addon default-storageclass should already be in state true
	I0328 22:06:29.932581 1338826 host.go:66] Checking if "old-k8s-version-633693" exists ...
	I0328 22:06:29.933003 1338826 cli_runner.go:164] Run: docker container inspect old-k8s-version-633693 --format={{.State.Status}}
	I0328 22:06:29.957640 1338826 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 22:06:29.960027 1338826 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 22:06:29.960072 1338826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 22:06:29.960282 1338826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-633693
	I0328 22:06:29.976133 1338826 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 22:06:29.983543 1338826 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 22:06:29.983572 1338826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 22:06:29.983655 1338826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-633693
	I0328 22:06:30.017743 1338826 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 22:06:30.017770 1338826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 22:06:30.017866 1338826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-633693
	I0328 22:06:30.024333 1338826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34554 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/old-k8s-version-633693/id_rsa Username:docker}
	I0328 22:06:30.044422 1338826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34554 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/old-k8s-version-633693/id_rsa Username:docker}
	I0328 22:06:30.065458 1338826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34554 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/old-k8s-version-633693/id_rsa Username:docker}
	I0328 22:06:30.084256 1338826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34554 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/old-k8s-version-633693/id_rsa Username:docker}
	I0328 22:06:30.159711 1338826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 22:06:30.198067 1338826 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-633693" to be "Ready" ...
	I0328 22:06:30.222912 1338826 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0328 22:06:30.222947 1338826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0328 22:06:30.256679 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 22:06:30.270784 1338826 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0328 22:06:30.270830 1338826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0328 22:06:30.311789 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 22:06:30.340923 1338826 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 22:06:30.340955 1338826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 22:06:30.347313 1338826 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0328 22:06:30.347354 1338826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0328 22:06:30.402844 1338826 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0328 22:06:30.402866 1338826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0328 22:06:30.431025 1338826 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 22:06:30.431062 1338826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 22:06:30.433226 1338826 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0328 22:06:30.433249 1338826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0328 22:06:30.457186 1338826 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 22:06:30.457212 1338826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 22:06:30.494064 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 22:06:30.527122 1338826 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0328 22:06:30.527199 1338826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0328 22:06:30.532679 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:30.532773 1338826 retry.go:31] will retry after 132.834426ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:30.581425 1338826 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0328 22:06:30.581463 1338826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0328 22:06:30.592873 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:30.592908 1338826 retry.go:31] will retry after 183.397238ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:30.633895 1338826 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0328 22:06:30.633948 1338826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0328 22:06:30.664032 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:30.664083 1338826 retry.go:31] will retry after 153.380004ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:30.666213 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 22:06:30.693271 1338826 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0328 22:06:30.693298 1338826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0328 22:06:30.714067 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0328 22:06:30.777300 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0328 22:06:30.818016 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0328 22:06:30.867707 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:30.867751 1338826 retry.go:31] will retry after 335.391985ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 22:06:30.990044 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:30.990088 1338826 retry.go:31] will retry after 192.126137ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 22:06:31.077625 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:31.077669 1338826 retry.go:31] will retry after 407.221177ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 22:06:31.100245 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:31.100282 1338826 retry.go:31] will retry after 342.986579ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:31.182438 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0328 22:06:31.204333 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0328 22:06:31.325011 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:31.325093 1338826 retry.go:31] will retry after 379.697327ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 22:06:31.394782 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:31.394869 1338826 retry.go:31] will retry after 348.480792ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:31.444229 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 22:06:31.485639 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0328 22:06:31.560017 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:31.560105 1338826 retry.go:31] will retry after 631.077482ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 22:06:31.646476 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:31.646554 1338826 retry.go:31] will retry after 670.51397ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:31.705236 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0328 22:06:31.743611 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0328 22:06:31.888997 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:31.889071 1338826 retry.go:31] will retry after 512.688787ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 22:06:31.933020 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:31.933093 1338826 retry.go:31] will retry after 554.40876ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
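
Each "ssh_runner.go:195] Run:" line is one invocation of the bundled kubectl with an explicit KUBECONFIG and one or more -f manifest flags. A self-contained sketch of that invocation shape (the applyManifests helper is illustrative; the paths are taken from the log, and the real runner executes the command over SSH inside the node rather than locally):

    package main

    import (
        "os"
        "os/exec"
    )

    // applyManifests builds "kubectl apply --force -f m1 -f m2 ..."
    // and runs it with KUBECONFIG pointed at the cluster's kubeconfig,
    // mirroring the commands logged above.
    func applyManifests(kubectl, kubeconfig string, manifests ...string) error {
        args := []string{"apply", "--force"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command(kubectl, args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }

    func main() {
        _ = applyManifests(
            "/var/lib/minikube/binaries/v1.20.0/kubectl",
            "/var/lib/minikube/kubeconfig",
            "/etc/kubernetes/addons/storageclass.yaml",
        )
    }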
	I0328 22:06:32.191569 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 22:06:32.199286 1338826 node_ready.go:53] error getting node "old-k8s-version-633693": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-633693": dial tcp 192.168.76.2:8443: connect: connection refused
	W0328 22:06:32.294118 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:32.294195 1338826 retry.go:31] will retry after 1.001863178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:32.317528 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0328 22:06:32.402693 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0328 22:06:32.405024 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:32.405057 1338826 retry.go:31] will retry after 590.284878ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:32.488356 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0328 22:06:32.500893 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:32.500937 1338826 retry.go:31] will retry after 974.32905ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 22:06:32.586583 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:32.586615 1338826 retry.go:31] will retry after 938.141295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:32.995588 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0328 22:06:33.090153 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:33.090187 1338826 retry.go:31] will retry after 1.564540504s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:33.297039 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0328 22:06:33.406185 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:33.406226 1338826 retry.go:31] will retry after 1.615893783s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:33.475492 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0328 22:06:33.525063 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0328 22:06:33.575605 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:33.575654 1338826 retry.go:31] will retry after 1.483884902s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 22:06:33.654881 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:33.654927 1338826 retry.go:31] will retry after 2.072986848s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:34.199618 1338826 node_ready.go:53] error getting node "old-k8s-version-633693": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-633693": dial tcp 192.168.76.2:8443: connect: connection refused
	I0328 22:06:34.654952 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0328 22:06:34.786056 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:34.786094 1338826 retry.go:31] will retry after 1.950803602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:35.022341 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 22:06:35.059845 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0328 22:06:35.216581 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:35.216621 1338826 retry.go:31] will retry after 2.592482542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0328 22:06:35.216690 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:35.216716 1338826 retry.go:31] will retry after 2.725049231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:35.728851 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0328 22:06:35.808082 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:35.808128 1338826 retry.go:31] will retry after 3.101242668s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:36.698678 1338826 node_ready.go:53] error getting node "old-k8s-version-633693": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-633693": dial tcp 192.168.76.2:8443: connect: connection refused
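
The "error getting node ... connection refused" lines come from polling the node object at the API server endpoint shown (https://192.168.76.2:8443). A stripped-down sketch of such a readiness poll using only the standard library; minikube itself uses an authenticated client-go clientset, so the unauthenticated, TLS-skipping client below is purely illustrative:

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    // nodeStatus decodes just enough of a Node object to read its conditions.
    type nodeStatus struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    func main() {
        // Skipping TLS verification and auth keeps the sketch short;
        // a real client presents cluster credentials.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        for attempt := 0; attempt < 30; attempt++ {
            resp, err := client.Get("https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-633693")
            if err != nil {
                // While the apiserver is down this prints e.g.
                // "connect: connection refused", as in the log above.
                fmt.Println("error getting node:", err)
                time.Sleep(2 * time.Second)
                continue
            }
            var n nodeStatus
            _ = json.NewDecoder(resp.Body).Decode(&n)
            resp.Body.Close()
            for _, c := range n.Status.Conditions {
                if c.Type == "Ready" && c.Status == "True" {
                    fmt.Println("node is Ready")
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("gave up waiting for Ready")
    }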
	I0328 22:06:36.737961 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0328 22:06:36.817507 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:36.817540 1338826 retry.go:31] will retry after 3.152292884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:37.809727 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0328 22:06:37.881761 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:37.881794 1338826 retry.go:31] will retry after 2.024843406s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:37.941978 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0328 22:06:38.026274 1338826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:38.026313 1338826 retry.go:31] will retry after 4.246886334s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0328 22:06:38.910288 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 22:06:39.907726 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 22:06:39.970380 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0328 22:06:42.274136 1338826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0328 22:06:47.750002 1338826 node_ready.go:49] node "old-k8s-version-633693" has status "Ready":"True"
	I0328 22:06:47.750028 1338826 node_ready.go:38] duration metric: took 17.551921336s for node "old-k8s-version-633693" to be "Ready" ...
	I0328 22:06:47.750038 1338826 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 22:06:48.006334 1338826 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-rq6t8" in "kube-system" namespace to be "Ready" ...
	I0328 22:06:48.885781 1338826 pod_ready.go:92] pod "coredns-74ff55c5b-rq6t8" in "kube-system" namespace has status "Ready":"True"
	I0328 22:06:48.885855 1338826 pod_ready.go:81] duration metric: took 879.428009ms for pod "coredns-74ff55c5b-rq6t8" in "kube-system" namespace to be "Ready" ...
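
The pod_ready.go lines that follow report each pod's "Ready" condition until it flips to "True". The underlying check reduces to scanning status.conditions, as in this sketch (podCondition and isPodReady are illustrative names, not minikube's):

    package main

    import "fmt"

    // podCondition is the minimal shape of an entry in a Pod's
    // status.conditions list.
    type podCondition struct {
        Type   string
        Status string
    }

    // isPodReady mirrors the check behind the pod_ready.go lines:
    // a pod counts as Ready when its "Ready" condition is "True".
    func isPodReady(conds []podCondition) bool {
        for _, c := range conds {
            if c.Type == "Ready" {
                return c.Status == "True"
            }
        }
        return false
    }

    func main() {
        // "False" means the poll loop keeps going, producing the
        // repeated has status "Ready":"False" lines seen below.
        fmt.Println(isPodReady([]podCondition{{Type: "Ready", Status: "False"}}))
    }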
	I0328 22:06:48.885882 1338826 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-633693" in "kube-system" namespace to be "Ready" ...
	I0328 22:06:49.434358 1338826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.524034886s)
	I0328 22:06:49.528607 1338826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.55818877s)
	I0328 22:06:49.528990 1338826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.621226595s)
	I0328 22:06:49.529042 1338826 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-633693"
	I0328 22:06:50.113778 1338826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.839573448s)
	I0328 22:06:50.117395 1338826 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-633693 addons enable metrics-server
	
	I0328 22:06:50.119986 1338826 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0328 22:06:50.122010 1338826 addons.go:505] duration metric: took 20.253710385s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0328 22:06:50.911018 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:06:53.406975 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:06:55.892196 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:06:57.895687 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:00.394968 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:02.892992 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:05.392200 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:07.400865 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:09.895160 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:12.393689 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:14.891542 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:16.894523 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:18.911624 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:21.405815 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:23.893456 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:25.897410 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:28.392082 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:30.393127 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:32.891448 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:34.895445 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:37.391764 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:39.392617 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:41.891599 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:43.891771 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:46.391837 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:48.393916 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:50.891892 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:52.893756 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:55.391654 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:57.393698 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:59.891612 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:02.393850 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:04.892201 1338826 pod_ready.go:92] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"True"
	I0328 22:08:04.892227 1338826 pod_ready.go:81] duration metric: took 1m16.006325351s for pod "etcd-old-k8s-version-633693" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:04.892259 1338826 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-633693" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:04.897466 1338826 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"True"
	I0328 22:08:04.897491 1338826 pod_ready.go:81] duration metric: took 5.218747ms for pod "kube-apiserver-old-k8s-version-633693" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:04.897503 1338826 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-633693" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:06.904157 1338826 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:09.403663 1338826 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:11.405298 1338826 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:13.904187 1338826 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:16.403711 1338826 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:17.909294 1338826 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"True"
	I0328 22:08:17.909370 1338826 pod_ready.go:81] duration metric: took 13.011857211s for pod "kube-controller-manager-old-k8s-version-633693" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:17.909397 1338826 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9vs8r" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:17.918637 1338826 pod_ready.go:92] pod "kube-proxy-9vs8r" in "kube-system" namespace has status "Ready":"True"
	I0328 22:08:17.918672 1338826 pod_ready.go:81] duration metric: took 9.258655ms for pod "kube-proxy-9vs8r" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:17.918685 1338826 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-633693" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:17.927105 1338826 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"True"
	I0328 22:08:17.927130 1338826 pod_ready.go:81] duration metric: took 8.437573ms for pod "kube-scheduler-old-k8s-version-633693" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:17.927142 1338826 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:19.942294 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:22.434712 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:24.435112 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:26.933676 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:28.935233 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:31.434711 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:33.932938 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:35.933182 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:37.933531 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:40.434985 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:42.932957 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:44.933746 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:47.432638 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:49.433249 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:51.433961 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:53.434580 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:55.932722 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:57.933867 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:00.434560 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:02.932551 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:04.932913 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:07.433550 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:09.433637 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:11.933587 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:14.434340 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:16.434764 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:18.932404 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:20.933066 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:22.938118 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:25.432907 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:27.433591 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:29.933701 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:32.433501 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:34.932827 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:36.934724 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:39.433686 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:41.434345 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:43.434422 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:45.932966 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:47.933007 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:49.933857 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:52.433046 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:54.434259 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:56.933632 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:58.933976 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:01.434400 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:03.434444 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:05.932668 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:07.934860 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:09.935369 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:12.434151 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:14.933434 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:16.934278 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:19.434252 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:21.434530 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:23.933010 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:26.433027 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:28.433948 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:30.933738 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:32.933910 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:35.432932 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:37.434118 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:39.441245 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:41.932963 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:43.933931 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:46.434034 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:48.933457 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:51.432895 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:53.434727 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:55.440622 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:57.933693 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:00.435477 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:02.932933 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:04.933833 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:07.433346 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:09.433437 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:11.933095 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:13.933603 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:16.433215 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:18.933087 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:20.933409 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:23.434532 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:25.434708 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:27.932941 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:29.933456 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:31.933872 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:34.433180 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:36.433308 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:38.433890 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:40.434164 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:42.932888 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:44.932980 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:46.933414 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:49.432985 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:51.433663 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:53.434707 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:55.933135 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:57.933821 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:00.434788 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:02.933354 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:04.934434 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:07.433060 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:09.433317 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:11.433605 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:13.437226 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:15.933636 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:17.933340 1338826 pod_ready.go:81] duration metric: took 4m0.006183924s for pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace to be "Ready" ...
	E0328 22:12:17.933366 1338826 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0328 22:12:17.933376 1338826 pod_ready.go:38] duration metric: took 5m30.183327024s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
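
The WaitExtra failure above is a poll loop whose context deadline fired before metrics-server-9975d5f86-h5ts8 ever reported Ready; "context deadline exceeded" is the standard Go context error. A minimal sketch of that timeout mechanism (waitFor is an illustrative helper; the run above used a multi-minute budget, shortened here so the demo finishes quickly):

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls check on a fixed interval until it succeeds or the
    // context expires; the ctx.Err() branch is what surfaces as
    // "context deadline exceeded" in the log above.
    func waitFor(ctx context.Context, interval time.Duration, check func() bool) error {
        t := time.NewTicker(interval)
        defer t.Stop()
        for {
            if check() {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-t.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()
        // The check never succeeds, standing in for a pod that never
        // becomes Ready, so the deadline wins.
        err := waitFor(ctx, 500*time.Millisecond, func() bool { return false })
        fmt.Println(errors.Is(err, context.DeadlineExceeded)) // true
    }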
	I0328 22:12:17.933390 1338826 api_server.go:52] waiting for apiserver process to appear ...
	I0328 22:12:17.933418 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 22:12:17.933480 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 22:12:17.975520 1338826 cri.go:89] found id: "aafa2b2860b7d02f5f63f8f79e578efc4b8a612e8845f3c01b98f89c881a05f5"
	I0328 22:12:17.975543 1338826 cri.go:89] found id: ""
	I0328 22:12:17.975551 1338826 logs.go:276] 1 containers: [aafa2b2860b7d02f5f63f8f79e578efc4b8a612e8845f3c01b98f89c881a05f5]
	I0328 22:12:17.975605 1338826 ssh_runner.go:195] Run: which crictl
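
Log gathering starts by resolving container IDs per component with "crictl ps -a --quiet --name=<component>"; --quiet prints one container ID per line, which is what shows up as "found id:" above. A small sketch of that step (listContainers is an illustrative wrapper and assumes crictl and passwordless sudo are available on the host):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers runs the same crictl query as the log and splits
    // the quiet output into individual container IDs. An empty result
    // corresponds to the `found id: ""` lines above.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := listContainers("kube-apiserver")
        fmt.Println(ids, err)
    }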
	I0328 22:12:17.978975 1338826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 22:12:17.979046 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 22:12:18.021754 1338826 cri.go:89] found id: "54454ca60825d1d8afde9d3954763066222f962cd282dd4ae5776677f1d02263"
	I0328 22:12:18.021779 1338826 cri.go:89] found id: ""
	I0328 22:12:18.021787 1338826 logs.go:276] 1 containers: [54454ca60825d1d8afde9d3954763066222f962cd282dd4ae5776677f1d02263]
	I0328 22:12:18.021845 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:18.025748 1338826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 22:12:18.025846 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 22:12:18.069778 1338826 cri.go:89] found id: "9a5cc20f063ad7f3789223916297cb6dbac2af4ce58e82d5bde29f4655036962"
	I0328 22:12:18.069809 1338826 cri.go:89] found id: ""
	I0328 22:12:18.069819 1338826 logs.go:276] 1 containers: [9a5cc20f063ad7f3789223916297cb6dbac2af4ce58e82d5bde29f4655036962]
	I0328 22:12:18.069881 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:18.073970 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 22:12:18.074054 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 22:12:18.116221 1338826 cri.go:89] found id: "0c1c9dee5f4ce24a9e91181634895b593f2aec0d21a11adbe16ddd1adce82d0b"
	I0328 22:12:18.116246 1338826 cri.go:89] found id: ""
	I0328 22:12:18.116254 1338826 logs.go:276] 1 containers: [0c1c9dee5f4ce24a9e91181634895b593f2aec0d21a11adbe16ddd1adce82d0b]
	I0328 22:12:18.116315 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:18.120360 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 22:12:18.120438 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 22:12:18.160651 1338826 cri.go:89] found id: "99963dd8c6223acf3495ce9e9c35c7f4c45a97d49ba1435654dd656d82537a95"
	I0328 22:12:18.160672 1338826 cri.go:89] found id: ""
	I0328 22:12:18.160681 1338826 logs.go:276] 1 containers: [99963dd8c6223acf3495ce9e9c35c7f4c45a97d49ba1435654dd656d82537a95]
	I0328 22:12:18.160741 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:18.164797 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 22:12:18.164870 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 22:12:18.208058 1338826 cri.go:89] found id: "accf5dfc23ea01d34e6a9b748711dcadd8f2af898f72bbc4348b539768308591"
	I0328 22:12:18.208165 1338826 cri.go:89] found id: ""
	I0328 22:12:18.208192 1338826 logs.go:276] 1 containers: [accf5dfc23ea01d34e6a9b748711dcadd8f2af898f72bbc4348b539768308591]
	I0328 22:12:18.208295 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:18.212186 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 22:12:18.212308 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 22:12:18.257286 1338826 cri.go:89] found id: "f410c25189f5f56842f0d6f5e959b565b53ee6a8d5258db914882d1e38e9ab2b"
	I0328 22:12:18.257353 1338826 cri.go:89] found id: ""
	I0328 22:12:18.257367 1338826 logs.go:276] 1 containers: [f410c25189f5f56842f0d6f5e959b565b53ee6a8d5258db914882d1e38e9ab2b]
	I0328 22:12:18.257425 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:18.263454 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 22:12:18.263535 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 22:12:18.301918 1338826 cri.go:89] found id: "dcb78e1b7465099ed99741a848314cba3d027ecfab8e44b022dc85231ab9a26d"
	I0328 22:12:18.301941 1338826 cri.go:89] found id: ""
	I0328 22:12:18.301949 1338826 logs.go:276] 1 containers: [dcb78e1b7465099ed99741a848314cba3d027ecfab8e44b022dc85231ab9a26d]
	I0328 22:12:18.302004 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:18.305551 1338826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0328 22:12:18.305626 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0328 22:12:18.346240 1338826 cri.go:89] found id: "dbebe9c9214b8446dd7e1ba58af2dcdda839029f75a4df5d16fce02680f0f723"
	I0328 22:12:18.346264 1338826 cri.go:89] found id: "de9034878b9e7c8d2e40e141b1c21f2a76586667af994d8465e9601d126955e0"
	I0328 22:12:18.346269 1338826 cri.go:89] found id: ""
	I0328 22:12:18.346277 1338826 logs.go:276] 2 containers: [dbebe9c9214b8446dd7e1ba58af2dcdda839029f75a4df5d16fce02680f0f723 de9034878b9e7c8d2e40e141b1c21f2a76586667af994d8465e9601d126955e0]
	I0328 22:12:18.346333 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:18.349950 1338826 ssh_runner.go:195] Run: which crictl
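The block above is minikube's log collector resolving one container ID per control-plane component by querying the CRI runtime directly; note that storage-provisioner returns two IDs, a current and an exited container, which is consistent with a restart during the run. The same enumeration can be reproduced by hand, a minimal sketch assuming the profile name from this log and the standard `minikube ssh` entry point:

    # From inside the node (minikube ssh -p old-k8s-version-633693):
    # one query per component, mirroring the --name filters in the Run lines above.
    # --quiet prints bare container IDs, which is what logs.go consumes.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard storage-provisioner; do
        echo "== ${name} =="
        sudo crictl ps -a --quiet --name="${name}"
    done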
	I0328 22:12:18.353205 1338826 logs.go:123] Gathering logs for kubelet ...
	I0328 22:12:18.353227 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0328 22:12:18.405413 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.826315     742 reflector.go:138] object-"kube-system"/"storage-provisioner-token-htmqq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-htmqq" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:18.405657 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.826559     742 reflector.go:138] object-"default"/"default-token-skqvg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-skqvg" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:18.405882 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.826733     742 reflector.go:138] object-"kube-system"/"metrics-server-token-qkmwr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-qkmwr" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:18.406094 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.829336     742 reflector.go:138] object-"kube-system"/"coredns-token-zjvsj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-zjvsj" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:18.406306 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.829576     742 reflector.go:138] object-"kube-system"/"kindnet-token-g4wkb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-g4wkb" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:18.406507 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.829784     742 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:18.406719 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.829969     742 reflector.go:138] object-"kube-system"/"kube-proxy-token-l4tct": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-l4tct" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:18.406925 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.830130     742 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:18.415925 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:50 old-k8s-version-633693 kubelet[742]: E0328 22:06:50.527663     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0328 22:12:18.416174 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:50 old-k8s-version-633693 kubelet[742]: E0328 22:06:50.814044     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.418221 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:04 old-k8s-version-633693 kubelet[742]: E0328 22:07:04.699349     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0328 22:12:18.418545 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:07 old-k8s-version-633693 kubelet[742]: E0328 22:07:07.707146     742 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-kddb6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-kddb6" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:18.421763 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:15 old-k8s-version-633693 kubelet[742]: E0328 22:07:15.934579     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.422221 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:16 old-k8s-version-633693 kubelet[742]: E0328 22:07:16.937591     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.422406 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:19 old-k8s-version-633693 kubelet[742]: E0328 22:07:19.733048     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.422861 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:22 old-k8s-version-633693 kubelet[742]: E0328 22:07:22.808225     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.424949 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:34 old-k8s-version-633693 kubelet[742]: E0328 22:07:34.698485     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0328 22:12:18.425546 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:36 old-k8s-version-633693 kubelet[742]: E0328 22:07:36.977259     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.425874 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:42 old-k8s-version-633693 kubelet[742]: E0328 22:07:42.808747     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.426059 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:48 old-k8s-version-633693 kubelet[742]: E0328 22:07:48.688107     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.426384 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:54 old-k8s-version-633693 kubelet[742]: E0328 22:07:54.687538     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.426568 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:02 old-k8s-version-633693 kubelet[742]: E0328 22:08:02.688287     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.427151 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:09 old-k8s-version-633693 kubelet[742]: E0328 22:08:09.025390     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.427477 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:12 old-k8s-version-633693 kubelet[742]: E0328 22:08:12.808271     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.427660 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:14 old-k8s-version-633693 kubelet[742]: E0328 22:08:14.687911     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.429727 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:25 old-k8s-version-633693 kubelet[742]: E0328 22:08:25.704436     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0328 22:12:18.430054 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:27 old-k8s-version-633693 kubelet[742]: E0328 22:08:27.687797     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.430237 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:39 old-k8s-version-633693 kubelet[742]: E0328 22:08:39.688581     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.430562 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:41 old-k8s-version-633693 kubelet[742]: E0328 22:08:41.687550     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.430787 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:51 old-k8s-version-633693 kubelet[742]: E0328 22:08:51.689572     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.431378 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:55 old-k8s-version-633693 kubelet[742]: E0328 22:08:55.094326     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.431561 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:02 old-k8s-version-633693 kubelet[742]: E0328 22:09:02.687902     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.431886 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:02 old-k8s-version-633693 kubelet[742]: E0328 22:09:02.808415     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.432226 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:13 old-k8s-version-633693 kubelet[742]: E0328 22:09:13.689276     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.432413 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:14 old-k8s-version-633693 kubelet[742]: E0328 22:09:14.688165     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.432741 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:24 old-k8s-version-633693 kubelet[742]: E0328 22:09:24.687486     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.432928 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:29 old-k8s-version-633693 kubelet[742]: E0328 22:09:29.688477     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.433254 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:35 old-k8s-version-633693 kubelet[742]: E0328 22:09:35.687479     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.433952 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:40 old-k8s-version-633693 kubelet[742]: E0328 22:09:40.688238     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.434277 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:48 old-k8s-version-633693 kubelet[742]: E0328 22:09:48.687433     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.436363 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:51 old-k8s-version-633693 kubelet[742]: E0328 22:09:51.697009     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0328 22:12:18.436691 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:02 old-k8s-version-633693 kubelet[742]: E0328 22:10:02.687437     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.436876 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:03 old-k8s-version-633693 kubelet[742]: E0328 22:10:03.688247     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.437462 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:16 old-k8s-version-633693 kubelet[742]: E0328 22:10:16.214323     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.437646 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:16 old-k8s-version-633693 kubelet[742]: E0328 22:10:16.688145     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.437971 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:22 old-k8s-version-633693 kubelet[742]: E0328 22:10:22.808176     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.438155 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:29 old-k8s-version-633693 kubelet[742]: E0328 22:10:29.688127     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.438480 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:33 old-k8s-version-633693 kubelet[742]: E0328 22:10:33.687521     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.438666 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:40 old-k8s-version-633693 kubelet[742]: E0328 22:10:40.688295     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.438993 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:46 old-k8s-version-633693 kubelet[742]: E0328 22:10:46.687410     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.439177 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:52 old-k8s-version-633693 kubelet[742]: E0328 22:10:52.688204     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.439502 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:57 old-k8s-version-633693 kubelet[742]: E0328 22:10:57.687778     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.439686 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:07 old-k8s-version-633693 kubelet[742]: E0328 22:11:07.688569     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.440010 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:10 old-k8s-version-633693 kubelet[742]: E0328 22:11:10.687505     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.440203 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:21 old-k8s-version-633693 kubelet[742]: E0328 22:11:21.688528     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.440529 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:25 old-k8s-version-633693 kubelet[742]: E0328 22:11:25.687833     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.440714 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:36 old-k8s-version-633693 kubelet[742]: E0328 22:11:36.687961     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.441289 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:39 old-k8s-version-633693 kubelet[742]: E0328 22:11:39.687565     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.441475 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:51 old-k8s-version-633693 kubelet[742]: E0328 22:11:51.689147     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.441800 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:54 old-k8s-version-633693 kubelet[742]: E0328 22:11:54.687465     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.441983 1338826 logs.go:138] Found kubelet problem: Mar 28 22:12:02 old-k8s-version-633693 kubelet[742]: E0328 22:12:02.688202     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.442307 1338826 logs.go:138] Found kubelet problem: Mar 28 22:12:06 old-k8s-version-633693 kubelet[742]: E0328 22:12:06.687449     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.442634 1338826 logs.go:138] Found kubelet problem: Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: E0328 22:12:17.687727     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.442816 1338826 logs.go:138] Found kubelet problem: Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: E0328 22:12:17.689382     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
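Every kubelet problem flagged above reduces to three recurring conditions. The "forbidden ... no relationship found" reflector errors all carry the 22:06:47 timestamp, the first seconds after the kubelet restart, and do not recur, consistent with the node authorizer not yet knowing which pods belong to the node. The metrics-server pod can never pull `fake.domain/registry.k8s.io/echoserver:1.4` because `fake.domain` does not resolve; that registry is plainly a deliberate fixture, so the ErrImagePull/ImagePullBackOff entries are expected noise rather than the failure under investigation. The dashboard-metrics-scraper pod, by contrast, is in CrashLoopBackOff, and the log shows kubelet's restart back-off doubling through its normal sequence (10s, 20s, 40s, 1m20s, 2m40s, capped at 5m). A sketch of how to confirm both pod states from the host, using the pod names from the log and the `--context <profile>` convention this report uses elsewhere:

    # Waiting reason for the metrics-server container (expect ImagePullBackOff).
    kubectl --context old-k8s-version-633693 -n kube-system \
      get pod metrics-server-9975d5f86-h5ts8 \
      -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}{"\n"}'
    # Restart count for the crash-looping scraper (expect it to keep climbing).
    kubectl --context old-k8s-version-633693 -n kubernetes-dashboard \
      get pod dashboard-metrics-scraper-8d5bb5db8-z2tdh \
      -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'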
	I0328 22:12:18.442826 1338826 logs.go:123] Gathering logs for describe nodes ...
	I0328 22:12:18.442840 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 22:12:18.610600 1338826 logs.go:123] Gathering logs for coredns [9a5cc20f063ad7f3789223916297cb6dbac2af4ce58e82d5bde29f4655036962] ...
	I0328 22:12:18.610629 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a5cc20f063ad7f3789223916297cb6dbac2af4ce58e82d5bde29f4655036962"
	I0328 22:12:18.651719 1338826 logs.go:123] Gathering logs for storage-provisioner [de9034878b9e7c8d2e40e141b1c21f2a76586667af994d8465e9601d126955e0] ...
	I0328 22:12:18.651747 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9034878b9e7c8d2e40e141b1c21f2a76586667af994d8465e9601d126955e0"
	I0328 22:12:18.692899 1338826 logs.go:123] Gathering logs for container status ...
	I0328 22:12:18.692993 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 22:12:18.756410 1338826 logs.go:123] Gathering logs for kube-apiserver [aafa2b2860b7d02f5f63f8f79e578efc4b8a612e8845f3c01b98f89c881a05f5] ...
	I0328 22:12:18.756441 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aafa2b2860b7d02f5f63f8f79e578efc4b8a612e8845f3c01b98f89c881a05f5"
	I0328 22:12:18.825430 1338826 logs.go:123] Gathering logs for kube-controller-manager [accf5dfc23ea01d34e6a9b748711dcadd8f2af898f72bbc4348b539768308591] ...
	I0328 22:12:18.825468 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 accf5dfc23ea01d34e6a9b748711dcadd8f2af898f72bbc4348b539768308591"
	I0328 22:12:18.915204 1338826 logs.go:123] Gathering logs for kindnet [f410c25189f5f56842f0d6f5e959b565b53ee6a8d5258db914882d1e38e9ab2b] ...
	I0328 22:12:18.915239 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f410c25189f5f56842f0d6f5e959b565b53ee6a8d5258db914882d1e38e9ab2b"
	I0328 22:12:18.970831 1338826 logs.go:123] Gathering logs for kube-proxy [99963dd8c6223acf3495ce9e9c35c7f4c45a97d49ba1435654dd656d82537a95] ...
	I0328 22:12:18.970914 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99963dd8c6223acf3495ce9e9c35c7f4c45a97d49ba1435654dd656d82537a95"
	I0328 22:12:19.019338 1338826 logs.go:123] Gathering logs for storage-provisioner [dbebe9c9214b8446dd7e1ba58af2dcdda839029f75a4df5d16fce02680f0f723] ...
	I0328 22:12:19.019365 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbebe9c9214b8446dd7e1ba58af2dcdda839029f75a4df5d16fce02680f0f723"
	I0328 22:12:19.064283 1338826 logs.go:123] Gathering logs for kubernetes-dashboard [dcb78e1b7465099ed99741a848314cba3d027ecfab8e44b022dc85231ab9a26d] ...
	I0328 22:12:19.064313 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcb78e1b7465099ed99741a848314cba3d027ecfab8e44b022dc85231ab9a26d"
	I0328 22:12:19.104471 1338826 logs.go:123] Gathering logs for CRI-O ...
	I0328 22:12:19.104499 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 22:12:19.184921 1338826 logs.go:123] Gathering logs for dmesg ...
	I0328 22:12:19.184961 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 22:12:19.203985 1338826 logs.go:123] Gathering logs for etcd [54454ca60825d1d8afde9d3954763066222f962cd282dd4ae5776677f1d02263] ...
	I0328 22:12:19.204137 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54454ca60825d1d8afde9d3954763066222f962cd282dd4ae5776677f1d02263"
	I0328 22:12:19.250334 1338826 logs.go:123] Gathering logs for kube-scheduler [0c1c9dee5f4ce24a9e91181634895b593f2aec0d21a11adbe16ddd1adce82d0b] ...
	I0328 22:12:19.250368 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c1c9dee5f4ce24a9e91181634895b593f2aec0d21a11adbe16ddd1adce82d0b"
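With the IDs resolved, each component's log is tailed through crictl, while kubelet and CRI-O come from journald and kernel messages from dmesg. The exact commands are scattered through the Run lines above; collected in one place for manual use on the node (the container ID shown is this run's etcd, any ID from the enumeration works):

    # Last 400 lines of one container's log, by ID.
    sudo /usr/bin/crictl logs --tail 400 54454ca60825d1d8afde9d3954763066222f962cd282dd4ae5776677f1d02263
    # Newest 400 lines of the runtime and kubelet unit logs.
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    # Kernel messages at warning severity and above.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400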
	I0328 22:12:19.293923 1338826 out.go:304] Setting ErrFile to fd 2...
	I0328 22:12:19.293952 1338826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 22:12:19.294030 1338826 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0328 22:12:19.294045 1338826 out.go:239]   Mar 28 22:11:54 old-k8s-version-633693 kubelet[742]: E0328 22:11:54.687465     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	  Mar 28 22:11:54 old-k8s-version-633693 kubelet[742]: E0328 22:11:54.687465     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:19.294087 1338826 out.go:239]   Mar 28 22:12:02 old-k8s-version-633693 kubelet[742]: E0328 22:12:02.688202     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Mar 28 22:12:02 old-k8s-version-633693 kubelet[742]: E0328 22:12:02.688202     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:19.294098 1338826 out.go:239]   Mar 28 22:12:06 old-k8s-version-633693 kubelet[742]: E0328 22:12:06.687449     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	  Mar 28 22:12:06 old-k8s-version-633693 kubelet[742]: E0328 22:12:06.687449     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:19.294105 1338826 out.go:239]   Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: E0328 22:12:17.687727     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	  Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: E0328 22:12:17.687727     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:19.294114 1338826 out.go:239]   Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: E0328 22:12:17.689382     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: E0328 22:12:17.689382     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0328 22:12:19.294121 1338826 out.go:304] Setting ErrFile to fd 2...
	I0328 22:12:19.294128 1338826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 22:12:29.295105 1338826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 22:12:29.306579 1338826 api_server.go:72] duration metric: took 5m59.438575826s to wait for apiserver process to appear ...
	I0328 22:12:29.306606 1338826 api_server.go:88] waiting for apiserver healthz status ...
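The pgrep check confirms the apiserver process is up after 5m59s of waiting; what remains is the healthz poll, and the container enumeration below repeats so each poll can report fresh logs if it fails. A sketch of the equivalent direct probe, assuming minikube's default apiserver port of 8443 (the port is not stated in this excerpt) and the default anonymous access to /healthz:

    # Resolve the node IP for this profile, then hit the healthz endpoint.
    NODE_IP=$(minikube -p old-k8s-version-633693 ip)
    curl -k "https://${NODE_IP}:8443/healthz"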
	I0328 22:12:29.306641 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 22:12:29.306705 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 22:12:29.345775 1338826 cri.go:89] found id: "aafa2b2860b7d02f5f63f8f79e578efc4b8a612e8845f3c01b98f89c881a05f5"
	I0328 22:12:29.345797 1338826 cri.go:89] found id: ""
	I0328 22:12:29.345805 1338826 logs.go:276] 1 containers: [aafa2b2860b7d02f5f63f8f79e578efc4b8a612e8845f3c01b98f89c881a05f5]
	I0328 22:12:29.345862 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:29.349735 1338826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 22:12:29.349812 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 22:12:29.389469 1338826 cri.go:89] found id: "54454ca60825d1d8afde9d3954763066222f962cd282dd4ae5776677f1d02263"
	I0328 22:12:29.389500 1338826 cri.go:89] found id: ""
	I0328 22:12:29.389510 1338826 logs.go:276] 1 containers: [54454ca60825d1d8afde9d3954763066222f962cd282dd4ae5776677f1d02263]
	I0328 22:12:29.389587 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:29.393252 1338826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 22:12:29.393335 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 22:12:29.435046 1338826 cri.go:89] found id: "9a5cc20f063ad7f3789223916297cb6dbac2af4ce58e82d5bde29f4655036962"
	I0328 22:12:29.435068 1338826 cri.go:89] found id: ""
	I0328 22:12:29.435076 1338826 logs.go:276] 1 containers: [9a5cc20f063ad7f3789223916297cb6dbac2af4ce58e82d5bde29f4655036962]
	I0328 22:12:29.435134 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:29.439053 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 22:12:29.439135 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 22:12:29.478063 1338826 cri.go:89] found id: "0c1c9dee5f4ce24a9e91181634895b593f2aec0d21a11adbe16ddd1adce82d0b"
	I0328 22:12:29.478084 1338826 cri.go:89] found id: ""
	I0328 22:12:29.478092 1338826 logs.go:276] 1 containers: [0c1c9dee5f4ce24a9e91181634895b593f2aec0d21a11adbe16ddd1adce82d0b]
	I0328 22:12:29.478148 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:29.481825 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 22:12:29.481896 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 22:12:29.524721 1338826 cri.go:89] found id: "99963dd8c6223acf3495ce9e9c35c7f4c45a97d49ba1435654dd656d82537a95"
	I0328 22:12:29.524784 1338826 cri.go:89] found id: ""
	I0328 22:12:29.524806 1338826 logs.go:276] 1 containers: [99963dd8c6223acf3495ce9e9c35c7f4c45a97d49ba1435654dd656d82537a95]
	I0328 22:12:29.524879 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:29.528593 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 22:12:29.528672 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 22:12:29.570444 1338826 cri.go:89] found id: "accf5dfc23ea01d34e6a9b748711dcadd8f2af898f72bbc4348b539768308591"
	I0328 22:12:29.570465 1338826 cri.go:89] found id: ""
	I0328 22:12:29.570472 1338826 logs.go:276] 1 containers: [accf5dfc23ea01d34e6a9b748711dcadd8f2af898f72bbc4348b539768308591]
	I0328 22:12:29.570548 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:29.574273 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 22:12:29.574347 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 22:12:29.615918 1338826 cri.go:89] found id: "f410c25189f5f56842f0d6f5e959b565b53ee6a8d5258db914882d1e38e9ab2b"
	I0328 22:12:29.615941 1338826 cri.go:89] found id: ""
	I0328 22:12:29.615948 1338826 logs.go:276] 1 containers: [f410c25189f5f56842f0d6f5e959b565b53ee6a8d5258db914882d1e38e9ab2b]
	I0328 22:12:29.616002 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:29.619941 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 22:12:29.620011 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 22:12:29.656396 1338826 cri.go:89] found id: "dcb78e1b7465099ed99741a848314cba3d027ecfab8e44b022dc85231ab9a26d"
	I0328 22:12:29.656421 1338826 cri.go:89] found id: ""
	I0328 22:12:29.656429 1338826 logs.go:276] 1 containers: [dcb78e1b7465099ed99741a848314cba3d027ecfab8e44b022dc85231ab9a26d]
	I0328 22:12:29.656484 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:29.660191 1338826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0328 22:12:29.660260 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0328 22:12:29.697904 1338826 cri.go:89] found id: "dbebe9c9214b8446dd7e1ba58af2dcdda839029f75a4df5d16fce02680f0f723"
	I0328 22:12:29.697933 1338826 cri.go:89] found id: "de9034878b9e7c8d2e40e141b1c21f2a76586667af994d8465e9601d126955e0"
	I0328 22:12:29.697938 1338826 cri.go:89] found id: ""
	I0328 22:12:29.697945 1338826 logs.go:276] 2 containers: [dbebe9c9214b8446dd7e1ba58af2dcdda839029f75a4df5d16fce02680f0f723 de9034878b9e7c8d2e40e141b1c21f2a76586667af994d8465e9601d126955e0]
	I0328 22:12:29.698002 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:29.701729 1338826 ssh_runner.go:195] Run: which crictl
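From here the report repeats the 22:12:18 cycle nearly verbatim: the same nine container IDs are found ten seconds later, so nothing restarted between polls. To mirror that cadence by hand from inside the node, a sketch:

    # Re-check a component's container set every 10 seconds,
    # matching the poll interval between the two cycles above.
    watch -n 10 'sudo crictl ps -a --name kube-apiserver'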
	I0328 22:12:29.705384 1338826 logs.go:123] Gathering logs for kube-proxy [99963dd8c6223acf3495ce9e9c35c7f4c45a97d49ba1435654dd656d82537a95] ...
	I0328 22:12:29.705454 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99963dd8c6223acf3495ce9e9c35c7f4c45a97d49ba1435654dd656d82537a95"
	I0328 22:12:29.749991 1338826 logs.go:123] Gathering logs for storage-provisioner [dbebe9c9214b8446dd7e1ba58af2dcdda839029f75a4df5d16fce02680f0f723] ...
	I0328 22:12:29.750020 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbebe9c9214b8446dd7e1ba58af2dcdda839029f75a4df5d16fce02680f0f723"
	I0328 22:12:29.795978 1338826 logs.go:123] Gathering logs for storage-provisioner [de9034878b9e7c8d2e40e141b1c21f2a76586667af994d8465e9601d126955e0] ...
	I0328 22:12:29.796009 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9034878b9e7c8d2e40e141b1c21f2a76586667af994d8465e9601d126955e0"
	I0328 22:12:29.859914 1338826 logs.go:123] Gathering logs for dmesg ...
	I0328 22:12:29.859942 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 22:12:29.880830 1338826 logs.go:123] Gathering logs for etcd [54454ca60825d1d8afde9d3954763066222f962cd282dd4ae5776677f1d02263] ...
	I0328 22:12:29.880857 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54454ca60825d1d8afde9d3954763066222f962cd282dd4ae5776677f1d02263"
	I0328 22:12:29.935958 1338826 logs.go:123] Gathering logs for kubernetes-dashboard [dcb78e1b7465099ed99741a848314cba3d027ecfab8e44b022dc85231ab9a26d] ...
	I0328 22:12:29.935992 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcb78e1b7465099ed99741a848314cba3d027ecfab8e44b022dc85231ab9a26d"
	I0328 22:12:29.981822 1338826 logs.go:123] Gathering logs for describe nodes ...
	I0328 22:12:29.981850 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 22:12:30.152830 1338826 logs.go:123] Gathering logs for kindnet [f410c25189f5f56842f0d6f5e959b565b53ee6a8d5258db914882d1e38e9ab2b] ...
	I0328 22:12:30.152868 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f410c25189f5f56842f0d6f5e959b565b53ee6a8d5258db914882d1e38e9ab2b"
	I0328 22:12:30.199134 1338826 logs.go:123] Gathering logs for CRI-O ...
	I0328 22:12:30.199177 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 22:12:30.283626 1338826 logs.go:123] Gathering logs for container status ...
	I0328 22:12:30.283664 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 22:12:30.353160 1338826 logs.go:123] Gathering logs for kube-scheduler [0c1c9dee5f4ce24a9e91181634895b593f2aec0d21a11adbe16ddd1adce82d0b] ...
	I0328 22:12:30.353193 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c1c9dee5f4ce24a9e91181634895b593f2aec0d21a11adbe16ddd1adce82d0b"
	I0328 22:12:30.405400 1338826 logs.go:123] Gathering logs for kube-controller-manager [accf5dfc23ea01d34e6a9b748711dcadd8f2af898f72bbc4348b539768308591] ...
	I0328 22:12:30.405432 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 accf5dfc23ea01d34e6a9b748711dcadd8f2af898f72bbc4348b539768308591"
	I0328 22:12:30.506110 1338826 logs.go:123] Gathering logs for coredns [9a5cc20f063ad7f3789223916297cb6dbac2af4ce58e82d5bde29f4655036962] ...
	I0328 22:12:30.506147 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a5cc20f063ad7f3789223916297cb6dbac2af4ce58e82d5bde29f4655036962"
	I0328 22:12:30.564490 1338826 logs.go:123] Gathering logs for kubelet ...
	I0328 22:12:30.564518 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0328 22:12:30.618150 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.826315     742 reflector.go:138] object-"kube-system"/"storage-provisioner-token-htmqq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-htmqq" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:30.618396 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.826559     742 reflector.go:138] object-"default"/"default-token-skqvg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-skqvg" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:30.618624 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.826733     742 reflector.go:138] object-"kube-system"/"metrics-server-token-qkmwr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-qkmwr" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:30.618835 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.829336     742 reflector.go:138] object-"kube-system"/"coredns-token-zjvsj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-zjvsj" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:30.619044 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.829576     742 reflector.go:138] object-"kube-system"/"kindnet-token-g4wkb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-g4wkb" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:30.619249 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.829784     742 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:30.619463 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.829969     742 reflector.go:138] object-"kube-system"/"kube-proxy-token-l4tct": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-l4tct" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:30.619666 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.830130     742 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:30.628872 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:50 old-k8s-version-633693 kubelet[742]: E0328 22:06:50.527663     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0328 22:12:30.629070 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:50 old-k8s-version-633693 kubelet[742]: E0328 22:06:50.814044     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.631140 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:04 old-k8s-version-633693 kubelet[742]: E0328 22:07:04.699349     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0328 22:12:30.631470 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:07 old-k8s-version-633693 kubelet[742]: E0328 22:07:07.707146     742 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-kddb6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-kddb6" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:30.634771 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:15 old-k8s-version-633693 kubelet[742]: E0328 22:07:15.934579     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.635235 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:16 old-k8s-version-633693 kubelet[742]: E0328 22:07:16.937591     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.635423 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:19 old-k8s-version-633693 kubelet[742]: E0328 22:07:19.733048     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.635928 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:22 old-k8s-version-633693 kubelet[742]: E0328 22:07:22.808225     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.638464 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:34 old-k8s-version-633693 kubelet[742]: E0328 22:07:34.698485     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0328 22:12:30.639072 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:36 old-k8s-version-633693 kubelet[742]: E0328 22:07:36.977259     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.639400 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:42 old-k8s-version-633693 kubelet[742]: E0328 22:07:42.808747     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.639586 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:48 old-k8s-version-633693 kubelet[742]: E0328 22:07:48.688107     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.639926 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:54 old-k8s-version-633693 kubelet[742]: E0328 22:07:54.687538     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.640169 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:02 old-k8s-version-633693 kubelet[742]: E0328 22:08:02.688287     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.640768 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:09 old-k8s-version-633693 kubelet[742]: E0328 22:08:09.025390     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.641095 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:12 old-k8s-version-633693 kubelet[742]: E0328 22:08:12.808271     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.641278 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:14 old-k8s-version-633693 kubelet[742]: E0328 22:08:14.687911     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.643355 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:25 old-k8s-version-633693 kubelet[742]: E0328 22:08:25.704436     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0328 22:12:30.643711 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:27 old-k8s-version-633693 kubelet[742]: E0328 22:08:27.687797     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.643899 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:39 old-k8s-version-633693 kubelet[742]: E0328 22:08:39.688581     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.644248 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:41 old-k8s-version-633693 kubelet[742]: E0328 22:08:41.687550     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.645266 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:51 old-k8s-version-633693 kubelet[742]: E0328 22:08:51.689572     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.645867 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:55 old-k8s-version-633693 kubelet[742]: E0328 22:08:55.094326     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.646051 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:02 old-k8s-version-633693 kubelet[742]: E0328 22:09:02.687902     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.646397 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:02 old-k8s-version-633693 kubelet[742]: E0328 22:09:02.808415     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.646732 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:13 old-k8s-version-633693 kubelet[742]: E0328 22:09:13.689276     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.646918 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:14 old-k8s-version-633693 kubelet[742]: E0328 22:09:14.688165     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.647245 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:24 old-k8s-version-633693 kubelet[742]: E0328 22:09:24.687486     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.647451 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:29 old-k8s-version-633693 kubelet[742]: E0328 22:09:29.688477     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.647780 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:35 old-k8s-version-633693 kubelet[742]: E0328 22:09:35.687479     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.648521 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:40 old-k8s-version-633693 kubelet[742]: E0328 22:09:40.688238     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.648899 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:48 old-k8s-version-633693 kubelet[742]: E0328 22:09:48.687433     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.651003 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:51 old-k8s-version-633693 kubelet[742]: E0328 22:09:51.697009     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0328 22:12:30.651336 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:02 old-k8s-version-633693 kubelet[742]: E0328 22:10:02.687437     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.651523 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:03 old-k8s-version-633693 kubelet[742]: E0328 22:10:03.688247     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.652119 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:16 old-k8s-version-633693 kubelet[742]: E0328 22:10:16.214323     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.652326 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:16 old-k8s-version-633693 kubelet[742]: E0328 22:10:16.688145     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.652674 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:22 old-k8s-version-633693 kubelet[742]: E0328 22:10:22.808176     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.652864 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:29 old-k8s-version-633693 kubelet[742]: E0328 22:10:29.688127     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.653193 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:33 old-k8s-version-633693 kubelet[742]: E0328 22:10:33.687521     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.653377 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:40 old-k8s-version-633693 kubelet[742]: E0328 22:10:40.688295     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.653719 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:46 old-k8s-version-633693 kubelet[742]: E0328 22:10:46.687410     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.653906 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:52 old-k8s-version-633693 kubelet[742]: E0328 22:10:52.688204     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.654232 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:57 old-k8s-version-633693 kubelet[742]: E0328 22:10:57.687778     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.654416 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:07 old-k8s-version-633693 kubelet[742]: E0328 22:11:07.688569     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.654742 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:10 old-k8s-version-633693 kubelet[742]: E0328 22:11:10.687505     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.654925 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:21 old-k8s-version-633693 kubelet[742]: E0328 22:11:21.688528     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.655250 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:25 old-k8s-version-633693 kubelet[742]: E0328 22:11:25.687833     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.655433 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:36 old-k8s-version-633693 kubelet[742]: E0328 22:11:36.687961     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.656006 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:39 old-k8s-version-633693 kubelet[742]: E0328 22:11:39.687565     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.656212 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:51 old-k8s-version-633693 kubelet[742]: E0328 22:11:51.689147     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.656539 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:54 old-k8s-version-633693 kubelet[742]: E0328 22:11:54.687465     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.656739 1338826 logs.go:138] Found kubelet problem: Mar 28 22:12:02 old-k8s-version-633693 kubelet[742]: E0328 22:12:02.688202     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.657073 1338826 logs.go:138] Found kubelet problem: Mar 28 22:12:06 old-k8s-version-633693 kubelet[742]: E0328 22:12:06.687449     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.657397 1338826 logs.go:138] Found kubelet problem: Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: E0328 22:12:17.687727     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.657583 1338826 logs.go:138] Found kubelet problem: Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: E0328 22:12:17.689382     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0328 22:12:30.657593 1338826 logs.go:123] Gathering logs for kube-apiserver [aafa2b2860b7d02f5f63f8f79e578efc4b8a612e8845f3c01b98f89c881a05f5] ...
	I0328 22:12:30.657607 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aafa2b2860b7d02f5f63f8f79e578efc4b8a612e8845f3c01b98f89c881a05f5"
	I0328 22:12:30.726909 1338826 out.go:304] Setting ErrFile to fd 2...
	I0328 22:12:30.726942 1338826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 22:12:30.727001 1338826 out.go:239] X Problems detected in kubelet:
	W0328 22:12:30.727013 1338826 out.go:239]   Mar 28 22:11:54 old-k8s-version-633693 kubelet[742]: E0328 22:11:54.687465     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.727021 1338826 out.go:239]   Mar 28 22:12:02 old-k8s-version-633693 kubelet[742]: E0328 22:12:02.688202     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.727028 1338826 out.go:239]   Mar 28 22:12:06 old-k8s-version-633693 kubelet[742]: E0328 22:12:06.687449     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.727039 1338826 out.go:239]   Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: E0328 22:12:17.687727     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.727048 1338826 out.go:239]   Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: E0328 22:12:17.689382     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0328 22:12:30.727062 1338826 out.go:304] Setting ErrFile to fd 2...
	I0328 22:12:30.727068 1338826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 22:12:40.727728 1338826 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0328 22:12:40.736874 1338826 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0328 22:12:40.739297 1338826 out.go:177] 
	W0328 22:12:40.741210 1338826 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0328 22:12:40.741248 1338826 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0328 22:12:40.741269 1338826 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0328 22:12:40.741277 1338826 out.go:239] * 
	W0328 22:12:40.742279 1338826 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 22:12:40.744622 1338826 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-633693 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 102
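For reference, the failing invocation and the recovery path suggested in the log above can be replayed by hand. This is a minimal sketch, assuming the same out/minikube-linux-arm64 build and a local Docker daemon; the profile name and every flag are taken verbatim from the failure output, not invented:

	# Discard the wedged profile state, as the log's suggestion advises
	out/minikube-linux-arm64 delete --all --purge

	# Re-run the exact start arguments that exited with status 102
	out/minikube-linux-arm64 start -p old-k8s-version-633693 --memory=2200 \
	  --alsologtostderr --wait=true --kvm-network=default \
	  --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.20.0

	# If it fails again, capture full logs for a GitHub issue, per the advice box above
	out/minikube-linux-arm64 -p old-k8s-version-633693 logs --file=logs.txt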
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-633693
helpers_test.go:235: (dbg) docker inspect old-k8s-version-633693:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fe636ffa3a97a0d6d85a45fcf40320435f4f1adeb9e541a502313138a659e23d",
	        "Created": "2024-03-28T22:03:42.343937493Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1339095,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-28T22:06:21.017617251Z",
	            "FinishedAt": "2024-03-28T22:06:19.787697003Z"
	        },
	        "Image": "sha256:d0f05b8b802e4c4af20a90d686bad8329f07849a8fda1b1d1c1dc3f527691df0",
	        "ResolvConfPath": "/var/lib/docker/containers/fe636ffa3a97a0d6d85a45fcf40320435f4f1adeb9e541a502313138a659e23d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe636ffa3a97a0d6d85a45fcf40320435f4f1adeb9e541a502313138a659e23d/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe636ffa3a97a0d6d85a45fcf40320435f4f1adeb9e541a502313138a659e23d/hosts",
	        "LogPath": "/var/lib/docker/containers/fe636ffa3a97a0d6d85a45fcf40320435f4f1adeb9e541a502313138a659e23d/fe636ffa3a97a0d6d85a45fcf40320435f4f1adeb9e541a502313138a659e23d-json.log",
	        "Name": "/old-k8s-version-633693",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-633693:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-633693",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b1bb84e177934ca9d1e28a6b32442cc506355d3c0f060bb86b85f2547add112f-init/diff:/var/lib/docker/overlay2/0b3d5a8e71016a91702d908cf9c681d5044b73b0921a0445a612c018590a7fd5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b1bb84e177934ca9d1e28a6b32442cc506355d3c0f060bb86b85f2547add112f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b1bb84e177934ca9d1e28a6b32442cc506355d3c0f060bb86b85f2547add112f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b1bb84e177934ca9d1e28a6b32442cc506355d3c0f060bb86b85f2547add112f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-633693",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-633693/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-633693",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-633693",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-633693",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bfab9cd129f3896983537e7c769fa9e01972d96004aae7d71c0ac352f89c206e",
	            "SandboxKey": "/var/run/docker/netns/bfab9cd129f3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34554"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34553"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34550"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34552"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34551"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-633693": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "d5924c2fd712aa94e2820ede0138d57a37e8180fad21d56577897748b25f755b",
	                    "EndpointID": "deddd649e6507b7d19733664775711215de826fe39b238bc6bb9aa4d6c779464",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-633693",
	                        "fe636ffa3a97"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
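Rather than reading the full dump, individual fields of the inspect output can be spot-checked with docker inspect's Go-template flag; a small sketch against the container above (container and network names are taken from the dump; -f/--format is standard Docker CLI syntax):

	# Run state and restart count
	docker inspect -f '{{.State.Status}} (restarts: {{.RestartCount}})' old-k8s-version-633693

	# Host port mapped to the API server port 8443/tcp (34551 in the dump above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-633693

	# Static IP on the profile network (192.168.76.2 in the dump above)
	docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-633693").IPAddress}}' old-k8s-version-633693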
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-633693 -n old-k8s-version-633693
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-633693 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-633693 logs -n 25: (1.64263639s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p force-systemd-env-978460                            | force-systemd-env-978460 | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:02 UTC | 28 Mar 24 22:02 UTC |
	|         | --memory=2048                                          |                          |         |                |                     |                     |
	|         | --alsologtostderr                                      |                          |         |                |                     |                     |
	|         | -v=5 --driver=docker                                   |                          |         |                |                     |                     |
	|         | --container-runtime=crio                               |                          |         |                |                     |                     |
	| ssh     | -p cilium-428181 sudo                                  | cilium-428181            | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:02 UTC |                     |
	|         | containerd config dump                                 |                          |         |                |                     |                     |
	| ssh     | -p cilium-428181 sudo                                  | cilium-428181            | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:02 UTC |                     |
	|         | systemctl status crio --all                            |                          |         |                |                     |                     |
	|         | --full --no-pager                                      |                          |         |                |                     |                     |
	| ssh     | -p cilium-428181 sudo                                  | cilium-428181            | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:02 UTC |                     |
	|         | systemctl cat crio --no-pager                          |                          |         |                |                     |                     |
	| ssh     | -p cilium-428181 sudo find                             | cilium-428181            | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:02 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                          |                          |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                          |         |                |                     |                     |
	| ssh     | -p cilium-428181 sudo crio                             | cilium-428181            | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:02 UTC |                     |
	|         | config                                                 |                          |         |                |                     |                     |
	| delete  | -p cilium-428181                                       | cilium-428181            | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:02 UTC | 28 Mar 24 22:02 UTC |
	| start   | -p cert-expiration-478493                              | cert-expiration-478493   | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:02 UTC | 28 Mar 24 22:02 UTC |
	|         | --memory=2048                                          |                          |         |                |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |                |                     |                     |
	|         | --driver=docker                                        |                          |         |                |                     |                     |
	|         | --container-runtime=crio                               |                          |         |                |                     |                     |
	| delete  | -p force-systemd-env-978460                            | force-systemd-env-978460 | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:02 UTC | 28 Mar 24 22:03 UTC |
	| start   | -p cert-options-487813                                 | cert-options-487813      | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:03 UTC | 28 Mar 24 22:03 UTC |
	|         | --memory=2048                                          |                          |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |                |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |                |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |                |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |                |                     |                     |
	|         | --driver=docker                                        |                          |         |                |                     |                     |
	|         | --container-runtime=crio                               |                          |         |                |                     |                     |
	| ssh     | cert-options-487813 ssh                                | cert-options-487813      | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:03 UTC | 28 Mar 24 22:03 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |                |                     |                     |
	| ssh     | -p cert-options-487813 -- sudo                         | cert-options-487813      | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:03 UTC | 28 Mar 24 22:03 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |                |                     |                     |
	| delete  | -p cert-options-487813                                 | cert-options-487813      | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:03 UTC | 28 Mar 24 22:03 UTC |
	| start   | -p old-k8s-version-633693                              | old-k8s-version-633693   | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:03 UTC | 28 Mar 24 22:05 UTC |
	|         | --memory=2200                                          |                          |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |                |                     |                     |
	|         | --kvm-network=default                                  |                          |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |                |                     |                     |
	|         | --keep-context=false                                   |                          |         |                |                     |                     |
	|         | --driver=docker                                        |                          |         |                |                     |                     |
	|         | --container-runtime=crio                               |                          |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |                |                     |                     |
	| start   | -p cert-expiration-478493                              | cert-expiration-478493   | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:05 UTC | 28 Mar 24 22:06 UTC |
	|         | --memory=2048                                          |                          |         |                |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |                |                     |                     |
	|         | --driver=docker                                        |                          |         |                |                     |                     |
	|         | --container-runtime=crio                               |                          |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-633693        | old-k8s-version-633693   | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:06 UTC | 28 Mar 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |                |                     |                     |
	| stop    | -p old-k8s-version-633693                              | old-k8s-version-633693   | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:06 UTC | 28 Mar 24 22:06 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |                |                     |                     |
	| delete  | -p cert-expiration-478493                              | cert-expiration-478493   | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:06 UTC | 28 Mar 24 22:06 UTC |
	| start   | -p no-preload-363849 --memory=2200                     | no-preload-363849        | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:06 UTC | 28 Mar 24 22:07 UTC |
	|         | --alsologtostderr --wait=true                          |                          |         |                |                     |                     |
	|         | --preload=false --driver=docker                        |                          |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                          |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                          |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-633693             | old-k8s-version-633693   | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:06 UTC | 28 Mar 24 22:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |                |                     |                     |
	| start   | -p old-k8s-version-633693                              | old-k8s-version-633693   | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:06 UTC |                     |
	|         | --memory=2200                                          |                          |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |                |                     |                     |
	|         | --kvm-network=default                                  |                          |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |                |                     |                     |
	|         | --keep-context=false                                   |                          |         |                |                     |                     |
	|         | --driver=docker                                        |                          |         |                |                     |                     |
	|         | --container-runtime=crio                               |                          |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-363849             | no-preload-363849        | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:07 UTC | 28 Mar 24 22:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |                |                     |                     |
	| stop    | -p no-preload-363849                                   | no-preload-363849        | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:07 UTC | 28 Mar 24 22:07 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-363849                  | no-preload-363849        | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:07 UTC | 28 Mar 24 22:07 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |                |                     |                     |
	| start   | -p no-preload-363849 --memory=2200                     | no-preload-363849        | jenkins | v1.33.0-beta.0 | 28 Mar 24 22:07 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |                |                     |                     |
	|         | --preload=false --driver=docker                        |                          |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                          |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                    |                          |         |                |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 22:07:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 22:07:48.620080 1343785 out.go:291] Setting OutFile to fd 1 ...
	I0328 22:07:48.620269 1343785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 22:07:48.620280 1343785 out.go:304] Setting ErrFile to fd 2...
	I0328 22:07:48.620285 1343785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 22:07:48.620542 1343785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17877-1145955/.minikube/bin
	I0328 22:07:48.620929 1343785 out.go:298] Setting JSON to false
	I0328 22:07:48.622023 1343785 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":21019,"bootTime":1711642650,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0328 22:07:48.622105 1343785 start.go:139] virtualization:  
	I0328 22:07:48.625003 1343785 out.go:177] * [no-preload-363849] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 22:07:48.627230 1343785 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 22:07:48.629021 1343785 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 22:07:48.627436 1343785 notify.go:220] Checking for updates...
	I0328 22:07:48.632649 1343785 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17877-1145955/kubeconfig
	I0328 22:07:48.634987 1343785 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17877-1145955/.minikube
	I0328 22:07:48.636705 1343785 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0328 22:07:48.638234 1343785 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 22:07:48.640431 1343785 config.go:182] Loaded profile config "no-preload-363849": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 22:07:48.640972 1343785 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 22:07:48.660681 1343785 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 22:07:48.660840 1343785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 22:07:48.748915 1343785 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-28 22:07:48.737712591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 22:07:48.749025 1343785 docker.go:295] overlay module found
	I0328 22:07:48.752687 1343785 out.go:177] * Using the docker driver based on existing profile
	I0328 22:07:48.754486 1343785 start.go:297] selected driver: docker
	I0328 22:07:48.754503 1343785 start.go:901] validating driver "docker" against &{Name:no-preload-363849 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-363849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 22:07:48.754638 1343785 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 22:07:48.755308 1343785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 22:07:48.820282 1343785 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-28 22:07:48.809879867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 22:07:48.820646 1343785 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 22:07:48.820716 1343785 cni.go:84] Creating CNI manager for ""
	I0328 22:07:48.820732 1343785 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0328 22:07:48.820777 1343785 start.go:340] cluster config:
	{Name:no-preload-363849 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-363849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 22:07:48.824217 1343785 out.go:177] * Starting "no-preload-363849" primary control-plane node in "no-preload-363849" cluster
	I0328 22:07:48.826415 1343785 cache.go:121] Beginning downloading kic base image for docker with crio
	I0328 22:07:48.828508 1343785 out.go:177] * Pulling base image v0.0.43-1711559786-18485 ...
	I0328 22:07:48.830536 1343785 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0328 22:07:48.830622 1343785 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon
	I0328 22:07:48.830743 1343785 profile.go:143] Saving config to /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/config.json ...
	I0328 22:07:48.831051 1343785 cache.go:107] acquiring lock: {Name:mke5e55ffb3fe8eafe321d175dc2a8af9b64484c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 22:07:48.831148 1343785 cache.go:115] /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0328 22:07:48.831161 1343785 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 116.873µs
	I0328 22:07:48.831177 1343785 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0328 22:07:48.831194 1343785 cache.go:107] acquiring lock: {Name:mk456b65bf19a94d96c5698786230d6851db1915 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 22:07:48.831230 1343785 cache.go:115] /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 exists
	I0328 22:07:48.831239 1343785 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" -> "/home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0" took 47.212µs
	I0328 22:07:48.831246 1343785 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0-beta.0 -> /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 succeeded
	I0328 22:07:48.831269 1343785 cache.go:107] acquiring lock: {Name:mkdb2bc151bf6319b00e147f017e9d57994ce976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 22:07:48.831303 1343785 cache.go:115] /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 exists
	I0328 22:07:48.831312 1343785 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" -> "/home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0" took 45.415µs
	I0328 22:07:48.831320 1343785 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 -> /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 succeeded
	I0328 22:07:48.831335 1343785 cache.go:107] acquiring lock: {Name:mka65a8818f844da73cc580450e190a5bbc15dc2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 22:07:48.831427 1343785 cache.go:115] /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 exists
	I0328 22:07:48.831442 1343785 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" -> "/home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0" took 107.757µs
	I0328 22:07:48.831449 1343785 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0-beta.0 -> /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 succeeded
	I0328 22:07:48.831460 1343785 cache.go:107] acquiring lock: {Name:mk544031203ecf3fa88fef78d19d5ed195123373 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 22:07:48.831499 1343785 cache.go:115] /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 exists
	I0328 22:07:48.831511 1343785 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0-beta.0" -> "/home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0" took 51.889µs
	I0328 22:07:48.831517 1343785 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0-beta.0 -> /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 succeeded
	I0328 22:07:48.831527 1343785 cache.go:107] acquiring lock: {Name:mk52a2be272d6f2ede165aed0a5a70989754b6b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 22:07:48.831558 1343785 cache.go:115] /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0328 22:07:48.831563 1343785 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 37.957µs
	I0328 22:07:48.831569 1343785 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0328 22:07:48.831578 1343785 cache.go:107] acquiring lock: {Name:mk6abde3ba85d27b6e675012e9e17c35a5ee7868 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 22:07:48.831607 1343785 cache.go:115] /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0328 22:07:48.831623 1343785 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 38.827µs
	I0328 22:07:48.831633 1343785 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0328 22:07:48.831642 1343785 cache.go:107] acquiring lock: {Name:mk62ee49752e0d711461ab19e817d9681b21c546 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 22:07:48.831673 1343785 cache.go:115] /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0328 22:07:48.831682 1343785 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 41.148µs
	I0328 22:07:48.831688 1343785 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0328 22:07:48.831693 1343785 cache.go:87] Successfully saved all images to host disk.
	I0328 22:07:48.853996 1343785 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon, skipping pull
	I0328 22:07:48.854020 1343785 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 exists in daemon, skipping load
	I0328 22:07:48.854059 1343785 cache.go:194] Successfully downloaded all kic artifacts
	I0328 22:07:48.854088 1343785 start.go:360] acquireMachinesLock for no-preload-363849: {Name:mk59ba1b235dae08e2a8e3c6c446dcb470763db9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 22:07:48.854149 1343785 start.go:364] duration metric: took 42.699µs to acquireMachinesLock for "no-preload-363849"
	I0328 22:07:48.854167 1343785 start.go:96] Skipping create...Using existing machine configuration
	I0328 22:07:48.854173 1343785 fix.go:54] fixHost starting: 
	I0328 22:07:48.854440 1343785 cli_runner.go:164] Run: docker container inspect no-preload-363849 --format={{.State.Status}}
	I0328 22:07:48.869716 1343785 fix.go:112] recreateIfNeeded on no-preload-363849: state=Stopped err=<nil>
	W0328 22:07:48.869749 1343785 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 22:07:48.872535 1343785 out.go:177] * Restarting existing docker container for "no-preload-363849" ...
	I0328 22:07:46.391837 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:48.393916 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:48.874374 1343785 cli_runner.go:164] Run: docker start no-preload-363849
	I0328 22:07:49.139516 1343785 cli_runner.go:164] Run: docker container inspect no-preload-363849 --format={{.State.Status}}
	I0328 22:07:49.167422 1343785 kic.go:430] container "no-preload-363849" state is running.
	I0328 22:07:49.167812 1343785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-363849
	I0328 22:07:49.189933 1343785 profile.go:143] Saving config to /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/config.json ...
	I0328 22:07:49.190174 1343785 machine.go:94] provisionDockerMachine start ...
	I0328 22:07:49.190236 1343785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-363849
	I0328 22:07:49.208226 1343785 main.go:141] libmachine: Using SSH client type: native
	I0328 22:07:49.208550 1343785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 34559 <nil> <nil>}
	I0328 22:07:49.208566 1343785 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 22:07:49.209265 1343785 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0328 22:07:52.347432 1343785 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-363849
	
	I0328 22:07:52.347471 1343785 ubuntu.go:169] provisioning hostname "no-preload-363849"
	I0328 22:07:52.347535 1343785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-363849
	I0328 22:07:52.364691 1343785 main.go:141] libmachine: Using SSH client type: native
	I0328 22:07:52.364941 1343785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 34559 <nil> <nil>}
	I0328 22:07:52.364958 1343785 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-363849 && echo "no-preload-363849" | sudo tee /etc/hostname
	I0328 22:07:52.520303 1343785 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-363849
	
	I0328 22:07:52.520380 1343785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-363849
	I0328 22:07:52.536005 1343785 main.go:141] libmachine: Using SSH client type: native
	I0328 22:07:52.536283 1343785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 34559 <nil> <nil>}
	I0328 22:07:52.536306 1343785 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-363849' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-363849/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-363849' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 22:07:52.676205 1343785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 22:07:52.676234 1343785 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17877-1145955/.minikube CaCertPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17877-1145955/.minikube}
	I0328 22:07:52.676305 1343785 ubuntu.go:177] setting up certificates
	I0328 22:07:52.676317 1343785 provision.go:84] configureAuth start
	I0328 22:07:52.676391 1343785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-363849
	I0328 22:07:52.691089 1343785 provision.go:143] copyHostCerts
	I0328 22:07:52.691154 1343785 exec_runner.go:144] found /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.pem, removing ...
	I0328 22:07:52.691172 1343785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.pem
	I0328 22:07:52.691246 1343785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.pem (1082 bytes)
	I0328 22:07:52.691342 1343785 exec_runner.go:144] found /home/jenkins/minikube-integration/17877-1145955/.minikube/cert.pem, removing ...
	I0328 22:07:52.691356 1343785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17877-1145955/.minikube/cert.pem
	I0328 22:07:52.691384 1343785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17877-1145955/.minikube/cert.pem (1123 bytes)
	I0328 22:07:52.691439 1343785 exec_runner.go:144] found /home/jenkins/minikube-integration/17877-1145955/.minikube/key.pem, removing ...
	I0328 22:07:52.691453 1343785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17877-1145955/.minikube/key.pem
	I0328 22:07:52.691481 1343785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17877-1145955/.minikube/key.pem (1679 bytes)
	I0328 22:07:52.691531 1343785 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca-key.pem org=jenkins.no-preload-363849 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-363849]
	I0328 22:07:52.985314 1343785 provision.go:177] copyRemoteCerts
	I0328 22:07:52.985390 1343785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 22:07:52.985434 1343785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-363849
	I0328 22:07:53.015581 1343785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34559 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/no-preload-363849/id_rsa Username:docker}
	I0328 22:07:53.117192 1343785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0328 22:07:53.143221 1343785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0328 22:07:53.168940 1343785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 22:07:53.194153 1343785 provision.go:87] duration metric: took 517.818912ms to configureAuth
	I0328 22:07:53.194189 1343785 ubuntu.go:193] setting minikube options for container-runtime
	I0328 22:07:53.194393 1343785 config.go:182] Loaded profile config "no-preload-363849": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 22:07:53.194506 1343785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-363849
	I0328 22:07:53.209637 1343785 main.go:141] libmachine: Using SSH client type: native
	I0328 22:07:53.209893 1343785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 34559 <nil> <nil>}
	I0328 22:07:53.209916 1343785 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0328 22:07:53.599961 1343785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0328 22:07:53.599985 1343785 machine.go:97] duration metric: took 4.409799859s to provisionDockerMachine
	I0328 22:07:53.599996 1343785 start.go:293] postStartSetup for "no-preload-363849" (driver="docker")
	I0328 22:07:53.600009 1343785 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 22:07:53.600082 1343785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 22:07:53.600151 1343785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-363849
	I0328 22:07:53.623997 1343785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34559 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/no-preload-363849/id_rsa Username:docker}
	I0328 22:07:53.725548 1343785 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 22:07:53.729249 1343785 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0328 22:07:53.729292 1343785 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0328 22:07:53.729303 1343785 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0328 22:07:53.729311 1343785 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0328 22:07:53.729322 1343785 filesync.go:126] Scanning /home/jenkins/minikube-integration/17877-1145955/.minikube/addons for local assets ...
	I0328 22:07:53.729377 1343785 filesync.go:126] Scanning /home/jenkins/minikube-integration/17877-1145955/.minikube/files for local assets ...
	I0328 22:07:53.729465 1343785 filesync.go:149] local asset: /home/jenkins/minikube-integration/17877-1145955/.minikube/files/etc/ssl/certs/11513632.pem -> 11513632.pem in /etc/ssl/certs
	I0328 22:07:53.729583 1343785 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 22:07:53.738707 1343785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/files/etc/ssl/certs/11513632.pem --> /etc/ssl/certs/11513632.pem (1708 bytes)
	I0328 22:07:53.766165 1343785 start.go:296] duration metric: took 166.154042ms for postStartSetup
	I0328 22:07:53.766269 1343785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 22:07:53.766322 1343785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-363849
	I0328 22:07:53.783044 1343785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34559 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/no-preload-363849/id_rsa Username:docker}
	I0328 22:07:53.883323 1343785 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0328 22:07:53.898944 1343785 fix.go:56] duration metric: took 5.044753978s for fixHost
	I0328 22:07:53.898972 1343785 start.go:83] releasing machines lock for "no-preload-363849", held for 5.044814647s
	I0328 22:07:53.899110 1343785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-363849
	I0328 22:07:53.917052 1343785 ssh_runner.go:195] Run: cat /version.json
	I0328 22:07:53.917109 1343785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-363849
	I0328 22:07:53.917358 1343785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 22:07:53.917414 1343785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-363849
	I0328 22:07:53.950893 1343785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34559 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/no-preload-363849/id_rsa Username:docker}
	I0328 22:07:53.958331 1343785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34559 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/no-preload-363849/id_rsa Username:docker}
	I0328 22:07:54.195328 1343785 ssh_runner.go:195] Run: systemctl --version
	I0328 22:07:54.199703 1343785 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0328 22:07:54.341099 1343785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0328 22:07:54.345438 1343785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 22:07:54.354481 1343785 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0328 22:07:54.354558 1343785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 22:07:54.363602 1343785 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0328 22:07:54.363626 1343785 start.go:494] detecting cgroup driver to use...
	I0328 22:07:54.363658 1343785 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0328 22:07:54.363705 1343785 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 22:07:54.380246 1343785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 22:07:54.396198 1343785 docker.go:217] disabling cri-docker service (if available) ...
	I0328 22:07:54.396263 1343785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0328 22:07:54.409776 1343785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0328 22:07:54.421184 1343785 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0328 22:07:54.515471 1343785 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0328 22:07:54.599970 1343785 docker.go:233] disabling docker service ...
	I0328 22:07:54.600137 1343785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0328 22:07:54.613036 1343785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0328 22:07:54.624686 1343785 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0328 22:07:54.721689 1343785 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0328 22:07:54.820705 1343785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0328 22:07:54.839746 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 22:07:54.857001 1343785 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0328 22:07:54.857071 1343785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 22:07:54.866856 1343785 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0328 22:07:54.866935 1343785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 22:07:54.876456 1343785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 22:07:54.886446 1343785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 22:07:54.899767 1343785 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 22:07:54.908742 1343785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 22:07:54.918666 1343785 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 22:07:54.928231 1343785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0328 22:07:54.938351 1343785 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 22:07:54.947353 1343785 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 22:07:54.955736 1343785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 22:07:55.041779 1343785 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0328 22:07:55.160958 1343785 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0328 22:07:55.161032 1343785 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0328 22:07:55.165715 1343785 start.go:562] Will wait 60s for crictl version
	I0328 22:07:55.165804 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:07:55.170103 1343785 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 22:07:55.211461 1343785 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0328 22:07:55.211565 1343785 ssh_runner.go:195] Run: crio --version
	I0328 22:07:55.249768 1343785 ssh_runner.go:195] Run: crio --version
	I0328 22:07:55.293152 1343785 out.go:177] * Preparing Kubernetes v1.30.0-beta.0 on CRI-O 1.24.6 ...
	I0328 22:07:50.891892 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:52.893756 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:55.391654 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:55.294914 1343785 cli_runner.go:164] Run: docker network inspect no-preload-363849 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0328 22:07:55.308457 1343785 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0328 22:07:55.312143 1343785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 22:07:55.323009 1343785 kubeadm.go:877] updating cluster {Name:no-preload-363849 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-363849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 22:07:55.323153 1343785 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0328 22:07:55.323203 1343785 ssh_runner.go:195] Run: sudo crictl images --output json
	I0328 22:07:55.376403 1343785 crio.go:514] all images are preloaded for cri-o runtime.
	I0328 22:07:55.376428 1343785 cache_images.go:84] Images are preloaded, skipping loading
	I0328 22:07:55.376436 1343785 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.30.0-beta.0 crio true true} ...
	I0328 22:07:55.376537 1343785 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-363849 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-363849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 22:07:55.376623 1343785 ssh_runner.go:195] Run: crio config
	I0328 22:07:55.432846 1343785 cni.go:84] Creating CNI manager for ""
	I0328 22:07:55.432873 1343785 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0328 22:07:55.432889 1343785 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 22:07:55.432913 1343785 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.30.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-363849 NodeName:no-preload-363849 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 22:07:55.433063 1343785 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-363849"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
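	(The block above is the multi-document kubeadm config minikube rendered for this profile: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. As a rough illustration of how such a file can be produced from the options logged at kubeadm.go:181, here is a minimal Go text/template sketch; the template and struct are hypothetical stand-ins, not minikube's actual bsutil templates.)

```go
// Minimal sketch: render a kubeadm config fragment from a struct via
// text/template. Illustrative only; minikube's real templates live in
// pkg/minikube/bootstrapper/bsutil and cover far more fields.
package main

import (
	"os"
	"text/template"
)

type clusterCfg struct {
	AdvertiseAddress  string
	BindPort          int
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	c := clusterCfg{
		AdvertiseAddress:  "192.168.85.2",
		BindPort:          8443,
		KubernetesVersion: "v1.30.0-beta.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	// template.Must is fine here: the template is a fixed literal.
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, c); err != nil {
		panic(err)
	}
}
```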
	I0328 22:07:55.433146 1343785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-beta.0
	I0328 22:07:55.443403 1343785 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 22:07:55.443496 1343785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 22:07:55.452427 1343785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0328 22:07:55.471647 1343785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0328 22:07:55.490185 1343785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
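	(The three `scp memory --> ...` lines stream in-memory payloads — the kubelet drop-in, the systemd unit, and kubeadm.yaml.new — to the node over SSH without a local temp file. Below is a sketch of that pattern using golang.org/x/crypto/ssh; the port, user, and auth are placeholders taken loosely from this log, and minikube's real ssh_runner differs in detail.)

```go
// Illustrative "scp memory" helper: pipe a byte slice into `sudo tee` over
// an SSH session. An assumption-laden sketch, not minikube's actual code.
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func copyMemory(client *ssh.Client, data []byte, dst string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// tee (rather than a shell redirect) keeps the write under sudo.
	return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", dst))
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.Password("example")}, // placeholder auth
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),               // acceptable for a local test VM only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:34559", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	unit := []byte("[Unit]\nWants=crio.service\n")
	if err := copyMemory(client, unit, "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"); err != nil {
		panic(err)
	}
}
```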
	I0328 22:07:55.508680 1343785 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0328 22:07:55.512223 1343785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
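	(The pair of commands above keeps /etc/hosts idempotent: grep checks whether the control-plane entry already exists, then the bash one-liner rewrites the file through a temp copy, dropping any stale line for the same hostname before appending the fresh one. A small helper that reproduces the logged command, for illustration only:)

```go
// Build the idempotent /etc/hosts update command seen in the log. The
// $'\t...' is bash ANSI-C quoting, so grep matches a literal tab before
// the hostname; the echoed entry uses a real tab between IP and host.
package main

import "fmt"

func updateHostsCmd(ip, host string) string {
	entry := ip + "\t" + host // real tab between IP and hostname
	return fmt.Sprintf(
		`{ grep -v $'\t%s$' "/etc/hosts"; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
		host, entry)
}

func main() {
	fmt.Println(updateHostsCmd("192.168.85.2", "control-plane.minikube.internal"))
}
```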
	I0328 22:07:55.523310 1343785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 22:07:55.613276 1343785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 22:07:55.627173 1343785 certs.go:68] Setting up /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849 for IP: 192.168.85.2
	I0328 22:07:55.627239 1343785 certs.go:194] generating shared ca certs ...
	I0328 22:07:55.627296 1343785 certs.go:226] acquiring lock for ca certs: {Name:mk1e4b3d6020f96643d0b806687ddcafb6824b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 22:07:55.627484 1343785 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.key
	I0328 22:07:55.627555 1343785 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/proxy-client-ca.key
	I0328 22:07:55.627618 1343785 certs.go:256] generating profile certs ...
	I0328 22:07:55.627761 1343785 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/client.key
	I0328 22:07:55.627850 1343785 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/apiserver.key.8d62a42c
	I0328 22:07:55.627935 1343785 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/proxy-client.key
	I0328 22:07:55.628078 1343785 certs.go:484] found cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/1151363.pem (1338 bytes)
	W0328 22:07:55.628190 1343785 certs.go:480] ignoring /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/1151363_empty.pem, impossibly tiny 0 bytes
	I0328 22:07:55.628228 1343785 certs.go:484] found cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca-key.pem (1679 bytes)
	I0328 22:07:55.628274 1343785 certs.go:484] found cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/ca.pem (1082 bytes)
	I0328 22:07:55.628337 1343785 certs.go:484] found cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/cert.pem (1123 bytes)
	I0328 22:07:55.628389 1343785 certs.go:484] found cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/key.pem (1679 bytes)
	I0328 22:07:55.628479 1343785 certs.go:484] found cert: /home/jenkins/minikube-integration/17877-1145955/.minikube/files/etc/ssl/certs/11513632.pem (1708 bytes)
	I0328 22:07:55.629418 1343785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 22:07:55.656776 1343785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0328 22:07:55.694519 1343785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 22:07:55.732617 1343785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0328 22:07:55.759187 1343785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0328 22:07:55.818372 1343785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0328 22:07:55.858446 1343785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 22:07:55.890894 1343785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0328 22:07:55.917568 1343785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/files/etc/ssl/certs/11513632.pem --> /usr/share/ca-certificates/11513632.pem (1708 bytes)
	I0328 22:07:55.942473 1343785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 22:07:55.967358 1343785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17877-1145955/.minikube/certs/1151363.pem --> /usr/share/ca-certificates/1151363.pem (1338 bytes)
	I0328 22:07:55.993664 1343785 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 22:07:56.015382 1343785 ssh_runner.go:195] Run: openssl version
	I0328 22:07:56.022693 1343785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11513632.pem && ln -fs /usr/share/ca-certificates/11513632.pem /etc/ssl/certs/11513632.pem"
	I0328 22:07:56.033305 1343785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11513632.pem
	I0328 22:07:56.036793 1343785 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 28 21:20 /usr/share/ca-certificates/11513632.pem
	I0328 22:07:56.036873 1343785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11513632.pem
	I0328 22:07:56.043726 1343785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11513632.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 22:07:56.053451 1343785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 22:07:56.062983 1343785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 22:07:56.066657 1343785 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 28 21:12 /usr/share/ca-certificates/minikubeCA.pem
	I0328 22:07:56.066794 1343785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 22:07:56.074313 1343785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 22:07:56.085299 1343785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1151363.pem && ln -fs /usr/share/ca-certificates/1151363.pem /etc/ssl/certs/1151363.pem"
	I0328 22:07:56.095052 1343785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1151363.pem
	I0328 22:07:56.098592 1343785 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 28 21:20 /usr/share/ca-certificates/1151363.pem
	I0328 22:07:56.098657 1343785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1151363.pem
	I0328 22:07:56.105920 1343785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1151363.pem /etc/ssl/certs/51391683.0"
	I0328 22:07:56.116437 1343785 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 22:07:56.119824 1343785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 22:07:56.126737 1343785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 22:07:56.134027 1343785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 22:07:56.141130 1343785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 22:07:56.147932 1343785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 22:07:56.155528 1343785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
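	(Each `openssl x509 ... -checkend 86400` run above asks whether the certificate expires within the next 24 hours — 86400 seconds — and a non-zero exit would trigger regeneration. A rough Go equivalent with crypto/x509, as an illustration rather than minikube's code:)

```go
// Report whether a PEM certificate expires within the given window,
// mirroring `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemBytes []byte, window time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil || block.Type != "CERTIFICATE" {
		return false, fmt.Errorf("no CERTIFICATE block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when "now + window" passes NotAfter, i.e. expiry is imminent.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	soon, err := expiresWithin(data, 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```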
	I0328 22:07:56.162411 1343785 kubeadm.go:391] StartCluster: {Name:no-preload-363849 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-363849 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenki
ns:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 22:07:56.162508 1343785 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0328 22:07:56.162578 1343785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0328 22:07:56.211566 1343785 cri.go:89] found id: ""
	I0328 22:07:56.211709 1343785 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0328 22:07:56.222645 1343785 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 22:07:56.222720 1343785 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 22:07:56.222754 1343785 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 22:07:56.222845 1343785 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 22:07:56.235882 1343785 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 22:07:56.236662 1343785 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-363849" does not appear in /home/jenkins/minikube-integration/17877-1145955/kubeconfig
	I0328 22:07:56.237002 1343785 kubeconfig.go:62] /home/jenkins/minikube-integration/17877-1145955/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-363849" cluster setting kubeconfig missing "no-preload-363849" context setting]
	I0328 22:07:56.237568 1343785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17877-1145955/kubeconfig: {Name:mk01de9100d65131f49674a0d1051891ca674cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 22:07:56.239285 1343785 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 22:07:56.251230 1343785 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.85.2
	I0328 22:07:56.251308 1343785 kubeadm.go:591] duration metric: took 28.529214ms to restartPrimaryControlPlane
	I0328 22:07:56.251330 1343785 kubeadm.go:393] duration metric: took 88.927944ms to StartCluster
	I0328 22:07:56.251385 1343785 settings.go:142] acquiring lock: {Name:mka22e5d6cd66b2677ac3cce373c1a6e13c189c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 22:07:56.251474 1343785 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17877-1145955/kubeconfig
	I0328 22:07:56.252580 1343785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17877-1145955/kubeconfig: {Name:mk01de9100d65131f49674a0d1051891ca674cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 22:07:56.252853 1343785 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0328 22:07:56.259968 1343785 out.go:177] * Verifying Kubernetes components...
	I0328 22:07:56.253446 1343785 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 22:07:56.253642 1343785 config.go:182] Loaded profile config "no-preload-363849": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 22:07:56.262777 1343785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 22:07:56.262813 1343785 addons.go:69] Setting storage-provisioner=true in profile "no-preload-363849"
	I0328 22:07:56.263026 1343785 addons.go:234] Setting addon storage-provisioner=true in "no-preload-363849"
	W0328 22:07:56.263039 1343785 addons.go:243] addon storage-provisioner should already be in state true
	I0328 22:07:56.263066 1343785 host.go:66] Checking if "no-preload-363849" exists ...
	I0328 22:07:56.263510 1343785 cli_runner.go:164] Run: docker container inspect no-preload-363849 --format={{.State.Status}}
	I0328 22:07:56.262822 1343785 addons.go:69] Setting dashboard=true in profile "no-preload-363849"
	I0328 22:07:56.263668 1343785 addons.go:234] Setting addon dashboard=true in "no-preload-363849"
	W0328 22:07:56.263680 1343785 addons.go:243] addon dashboard should already be in state true
	I0328 22:07:56.263700 1343785 host.go:66] Checking if "no-preload-363849" exists ...
	I0328 22:07:56.264064 1343785 cli_runner.go:164] Run: docker container inspect no-preload-363849 --format={{.State.Status}}
	I0328 22:07:56.262829 1343785 addons.go:69] Setting default-storageclass=true in profile "no-preload-363849"
	I0328 22:07:56.266653 1343785 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-363849"
	I0328 22:07:56.267042 1343785 cli_runner.go:164] Run: docker container inspect no-preload-363849 --format={{.State.Status}}
	I0328 22:07:56.262835 1343785 addons.go:69] Setting metrics-server=true in profile "no-preload-363849"
	I0328 22:07:56.268242 1343785 addons.go:234] Setting addon metrics-server=true in "no-preload-363849"
	W0328 22:07:56.268269 1343785 addons.go:243] addon metrics-server should already be in state true
	I0328 22:07:56.268304 1343785 host.go:66] Checking if "no-preload-363849" exists ...
	I0328 22:07:56.268922 1343785 cli_runner.go:164] Run: docker container inspect no-preload-363849 --format={{.State.Status}}
	I0328 22:07:56.356692 1343785 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0328 22:07:56.361252 1343785 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0328 22:07:56.356629 1343785 addons.go:234] Setting addon default-storageclass=true in "no-preload-363849"
	I0328 22:07:56.363996 1343785 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0328 22:07:56.364996 1343785 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0328 22:07:56.365007 1343785 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 22:07:56.366967 1343785 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0328 22:07:56.364997 1343785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0328 22:07:56.366992 1343785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0328 22:07:56.367055 1343785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-363849
	I0328 22:07:56.367071 1343785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-363849
	I0328 22:07:56.374647 1343785 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 22:07:56.374671 1343785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 22:07:56.374736 1343785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-363849
	W0328 22:07:56.364010 1343785 addons.go:243] addon default-storageclass should already be in state true
	I0328 22:07:56.377707 1343785 host.go:66] Checking if "no-preload-363849" exists ...
	I0328 22:07:56.378157 1343785 cli_runner.go:164] Run: docker container inspect no-preload-363849 --format={{.State.Status}}
	I0328 22:07:56.400077 1343785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34559 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/no-preload-363849/id_rsa Username:docker}
	I0328 22:07:56.413221 1343785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34559 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/no-preload-363849/id_rsa Username:docker}
	I0328 22:07:56.443407 1343785 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 22:07:56.443427 1343785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 22:07:56.443489 1343785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-363849
	I0328 22:07:56.444743 1343785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34559 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/no-preload-363849/id_rsa Username:docker}
	I0328 22:07:56.472393 1343785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34559 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/no-preload-363849/id_rsa Username:docker}
	I0328 22:07:56.603058 1343785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 22:07:56.623005 1343785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 22:07:56.631866 1343785 node_ready.go:35] waiting up to 6m0s for node "no-preload-363849" to be "Ready" ...
	I0328 22:07:56.701924 1343785 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0328 22:07:56.701952 1343785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0328 22:07:56.708326 1343785 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0328 22:07:56.708348 1343785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0328 22:07:56.716558 1343785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 22:07:56.776499 1343785 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0328 22:07:56.776563 1343785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0328 22:07:56.791089 1343785 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0328 22:07:56.791116 1343785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0328 22:07:56.906688 1343785 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0328 22:07:56.906753 1343785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0328 22:07:56.913260 1343785 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0328 22:07:56.913330 1343785 retry.go:31] will retry after 151.649039ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0328 22:07:56.923131 1343785 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 22:07:56.923195 1343785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0328 22:07:56.964192 1343785 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0328 22:07:56.964231 1343785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0328 22:07:56.982037 1343785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0328 22:07:57.001821 1343785 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0328 22:07:57.001861 1343785 retry.go:31] will retry after 307.685852ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0328 22:07:57.032382 1343785 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0328 22:07:57.032413 1343785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0328 22:07:57.065710 1343785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 22:07:57.085247 1343785 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0328 22:07:57.085275 1343785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0328 22:07:57.120033 1343785 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0328 22:07:57.120101 1343785 retry.go:31] will retry after 241.303163ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0328 22:07:57.136860 1343785 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0328 22:07:57.136888 1343785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0328 22:07:57.185942 1343785 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0328 22:07:57.185968 1343785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0328 22:07:57.218032 1343785 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0328 22:07:57.218059 1343785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0328 22:07:57.238836 1343785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0328 22:07:57.310060 1343785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0328 22:07:57.362355 1343785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0328 22:07:57.393698 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:07:59.891612 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:02.149786 1343785 node_ready.go:49] node "no-preload-363849" has status "Ready":"True"
	I0328 22:08:02.149820 1343785 node_ready.go:38] duration metric: took 5.517880837s for node "no-preload-363849" to be "Ready" ...
	I0328 22:08:02.149832 1343785 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 22:08:02.601578 1343785 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-58zbh" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:02.839993 1343785 pod_ready.go:92] pod "coredns-7db6d8ff4d-58zbh" in "kube-system" namespace has status "Ready":"True"
	I0328 22:08:02.840028 1343785 pod_ready.go:81] duration metric: took 238.412952ms for pod "coredns-7db6d8ff4d-58zbh" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:02.840045 1343785 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-363849" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:03.105364 1343785 pod_ready.go:92] pod "etcd-no-preload-363849" in "kube-system" namespace has status "Ready":"True"
	I0328 22:08:03.105398 1343785 pod_ready.go:81] duration metric: took 265.344195ms for pod "etcd-no-preload-363849" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:03.105416 1343785 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-363849" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:03.756256 1343785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.690496709s)
	I0328 22:08:04.049993 1343785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (6.739886052s)
	I0328 22:08:04.050225 1343785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.811346395s)
	I0328 22:08:04.050359 1343785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.687969579s)
	I0328 22:08:04.050385 1343785 addons.go:470] Verifying addon metrics-server=true in "no-preload-363849"
	I0328 22:08:04.052343 1343785 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-363849 addons enable metrics-server
	
	I0328 22:08:04.057216 1343785 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0328 22:08:02.393850 1338826 pod_ready.go:102] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:04.892201 1338826 pod_ready.go:92] pod "etcd-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"True"
	I0328 22:08:04.892227 1338826 pod_ready.go:81] duration metric: took 1m16.006325351s for pod "etcd-old-k8s-version-633693" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:04.892259 1338826 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-633693" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:04.897466 1338826 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"True"
	I0328 22:08:04.897491 1338826 pod_ready.go:81] duration metric: took 5.218747ms for pod "kube-apiserver-old-k8s-version-633693" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:04.897503 1338826 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-633693" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:04.059415 1343785 addons.go:505] duration metric: took 7.80597283s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0328 22:08:05.112621 1343785 pod_ready.go:102] pod "kube-apiserver-no-preload-363849" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:07.611848 1343785 pod_ready.go:102] pod "kube-apiserver-no-preload-363849" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:06.904157 1338826 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:09.403663 1338826 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:10.112775 1343785 pod_ready.go:102] pod "kube-apiserver-no-preload-363849" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:12.111943 1343785 pod_ready.go:92] pod "kube-apiserver-no-preload-363849" in "kube-system" namespace has status "Ready":"True"
	I0328 22:08:12.111967 1343785 pod_ready.go:81] duration metric: took 9.006543407s for pod "kube-apiserver-no-preload-363849" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:12.111978 1343785 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-363849" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:12.118129 1343785 pod_ready.go:92] pod "kube-controller-manager-no-preload-363849" in "kube-system" namespace has status "Ready":"True"
	I0328 22:08:12.118157 1343785 pod_ready.go:81] duration metric: took 6.17042ms for pod "kube-controller-manager-no-preload-363849" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:12.118171 1343785 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c2fkg" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:12.123666 1343785 pod_ready.go:92] pod "kube-proxy-c2fkg" in "kube-system" namespace has status "Ready":"True"
	I0328 22:08:12.123693 1343785 pod_ready.go:81] duration metric: took 5.514893ms for pod "kube-proxy-c2fkg" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:12.123706 1343785 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-363849" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:12.137241 1343785 pod_ready.go:92] pod "kube-scheduler-no-preload-363849" in "kube-system" namespace has status "Ready":"True"
	I0328 22:08:12.137265 1343785 pod_ready.go:81] duration metric: took 13.551188ms for pod "kube-scheduler-no-preload-363849" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:12.137277 1343785 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace to be "Ready" ...
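	(From here on, the two interleaved profiles — PIDs 1343785 and 1338826 — mostly poll their metrics-server pods, which stay `Ready: False`, presumably because the test points the addon at the `fake.domain` registry so the image can never be pulled. The `pod_ready` wait amounts to polling the PodReady condition until a timeout; a client-go sketch of such a loop is below, where the cadence and function names are assumptions, not minikube's real helper.)

```go
// Poll a pod's Ready condition until it is True or the timeout elapses.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient API errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name taken from the log; 6m0s matches the logged wait budget.
	err = waitPodReady(context.Background(), cs, "kube-system", "metrics-server-569cc877fc-pttn4", 6*time.Minute)
	fmt.Println("ready wait result:", err)
}
```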
	I0328 22:08:11.405298 1338826 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:13.904187 1338826 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:14.144059 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:16.643651 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:16.403711 1338826 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:17.909294 1338826 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"True"
	I0328 22:08:17.909370 1338826 pod_ready.go:81] duration metric: took 13.011857211s for pod "kube-controller-manager-old-k8s-version-633693" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:17.909397 1338826 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9vs8r" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:17.918637 1338826 pod_ready.go:92] pod "kube-proxy-9vs8r" in "kube-system" namespace has status "Ready":"True"
	I0328 22:08:17.918672 1338826 pod_ready.go:81] duration metric: took 9.258655ms for pod "kube-proxy-9vs8r" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:17.918685 1338826 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-633693" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:17.927105 1338826 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-633693" in "kube-system" namespace has status "Ready":"True"
	I0328 22:08:17.927130 1338826 pod_ready.go:81] duration metric: took 8.437573ms for pod "kube-scheduler-old-k8s-version-633693" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:17.927142 1338826 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace to be "Ready" ...
	I0328 22:08:19.942294 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:19.143418 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:21.144771 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:22.434712 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:24.435112 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:23.650528 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:26.144807 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:26.933676 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:28.935233 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:28.643298 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:30.643343 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:33.142890 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:31.434711 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:33.932938 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:35.144472 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:37.646250 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:35.933182 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:37.933531 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:40.434985 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:40.149566 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:42.643175 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:42.932957 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:44.933746 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:44.643416 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:46.643671 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:47.432638 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:49.433249 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:49.143851 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:51.144359 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:51.433961 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:53.434580 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:53.644020 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:56.144300 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:58.145689 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:55.932722 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:08:57.933867 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:00.434560 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:00.161198 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:02.643446 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:02.932551 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:04.932913 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:05.144438 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:07.643527 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:07.433550 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:09.433637 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:10.143586 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:12.144340 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:11.933587 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:14.434340 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:14.155912 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:16.643503 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:16.434764 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:18.932404 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:18.643897 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:21.144159 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:23.144703 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:20.933066 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:22.938118 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:25.432907 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:25.643119 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:27.644236 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:27.433591 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:29.933701 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:30.145427 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:32.645092 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:32.433501 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:34.932827 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:35.144156 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:37.643526 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:36.934724 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:39.433686 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:40.144030 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:42.146006 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:41.434345 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:43.434422 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:44.643631 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:47.152838 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:45.932966 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:47.933007 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:49.933857 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:49.643780 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:52.143925 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:52.433046 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:54.434259 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:54.643593 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:57.144203 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:56.933632 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:58.933976 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:09:59.643009 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:01.644333 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:01.434400 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:03.434444 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:04.143341 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:06.147812 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:05.932668 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:07.934860 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:09.935369 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:08.643466 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:10.643927 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:12.644512 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:12.434151 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:14.933434 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:15.143703 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:17.143976 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:16.934278 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:19.434252 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:19.643434 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:22.143905 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:21.434530 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:23.933010 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:24.643574 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:26.643657 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:26.433027 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:28.433948 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:29.143718 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:31.643754 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:30.933738 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:32.933910 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:35.432932 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:34.143981 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:36.643479 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:37.434118 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:39.441245 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:39.143486 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:41.143865 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:41.932963 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:43.933931 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:43.643481 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:46.143296 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:48.144234 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:46.434034 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:48.933457 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:50.643518 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:52.643870 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:51.432895 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:53.434727 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:55.440622 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:55.145055 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:57.643473 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:10:57.933693 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:00.435477 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:00.176261 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:02.643711 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:02.932933 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:04.933833 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:05.143659 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:07.643313 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:07.433346 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:09.433437 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:09.643536 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:11.644359 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:11.933095 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:13.933603 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:14.143848 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:16.643114 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:16.433215 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:18.933087 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:19.143585 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:21.144363 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:20.933409 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:23.434532 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:25.434708 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:23.643590 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:26.142875 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:27.932941 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:29.933456 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:28.642795 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:30.643048 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:32.643408 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:31.933872 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:34.433180 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:34.644530 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:37.143975 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:36.433308 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:38.433890 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:40.434164 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:39.643734 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:42.144506 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:42.932888 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:44.932980 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:44.643656 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:47.143743 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:46.933414 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:49.432985 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:49.144185 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:51.644630 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:51.433663 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:53.434707 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:54.144272 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:56.643147 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:55.933135 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:57.933821 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:00.434788 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:11:58.643245 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:00.644232 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:03.142946 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:02.933354 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:04.934434 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:05.143851 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:07.144042 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:07.433060 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:09.433317 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:09.144261 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:11.643843 1343785 pod_ready.go:102] pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:12.143791 1343785 pod_ready.go:81] duration metric: took 4m0.006499541s for pod "metrics-server-569cc877fc-pttn4" in "kube-system" namespace to be "Ready" ...
	E0328 22:12:12.143820 1343785 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0328 22:12:12.143830 1343785 pod_ready.go:38] duration metric: took 4m9.993988098s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
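
The interleaved wait loops above (processes 1338826 and 1343785) come from minikube's pod_ready helper, which re-checks the metrics-server pod's Ready condition roughly every two seconds until a hard deadline, then gives up with "context deadline exceeded". A minimal sketch of that style of loop, assuming recent client-go/apimachinery APIs; the package and function names here are illustrative, not minikube's own:

// Hypothetical sketch (not minikube's code): poll a pod's Ready condition
// every 2s until timeout, printing status lines like the ones in this log.
package podready

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", name, ns, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
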
	I0328 22:12:12.143845 1343785 api_server.go:52] waiting for apiserver process to appear ...
	I0328 22:12:12.143876 1343785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 22:12:12.143943 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 22:12:12.202036 1343785 cri.go:89] found id: "2802d01750454277c58463803cc59154ce9ed35544da7d3b40387123a0807b37"
	I0328 22:12:12.202062 1343785 cri.go:89] found id: ""
	I0328 22:12:12.202070 1343785 logs.go:276] 1 containers: [2802d01750454277c58463803cc59154ce9ed35544da7d3b40387123a0807b37]
	I0328 22:12:12.202136 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:12.207197 1343785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 22:12:12.207271 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 22:12:12.256751 1343785 cri.go:89] found id: "77a89f4b79b5e13c02662c8ba6f6f3a922d205bf39b5c63ab0720b597e5ad2f4"
	I0328 22:12:12.256779 1343785 cri.go:89] found id: ""
	I0328 22:12:12.256788 1343785 logs.go:276] 1 containers: [77a89f4b79b5e13c02662c8ba6f6f3a922d205bf39b5c63ab0720b597e5ad2f4]
	I0328 22:12:12.256845 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:12.260599 1343785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 22:12:12.260677 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 22:12:12.300601 1343785 cri.go:89] found id: "ad1c78378fd080073e6d7b4adcb4ebcd3512cdf786702dc1c9708c29fd3c43b7"
	I0328 22:12:12.300625 1343785 cri.go:89] found id: ""
	I0328 22:12:12.300634 1343785 logs.go:276] 1 containers: [ad1c78378fd080073e6d7b4adcb4ebcd3512cdf786702dc1c9708c29fd3c43b7]
	I0328 22:12:12.300703 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:12.304280 1343785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 22:12:12.304359 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 22:12:12.344780 1343785 cri.go:89] found id: "c0438a8c8a7c6ee24b7206071fa6b63dd553fbf14f4fd558528c57917097cde5"
	I0328 22:12:12.344852 1343785 cri.go:89] found id: ""
	I0328 22:12:12.344867 1343785 logs.go:276] 1 containers: [c0438a8c8a7c6ee24b7206071fa6b63dd553fbf14f4fd558528c57917097cde5]
	I0328 22:12:12.344942 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:12.349071 1343785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 22:12:12.349208 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 22:12:12.393558 1343785 cri.go:89] found id: "02c2f0a0149bac38e3cc7ba146663278ece69b0c2c72593414ea1aa8114deb43"
	I0328 22:12:12.393578 1343785 cri.go:89] found id: ""
	I0328 22:12:12.393586 1343785 logs.go:276] 1 containers: [02c2f0a0149bac38e3cc7ba146663278ece69b0c2c72593414ea1aa8114deb43]
	I0328 22:12:12.393642 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:12.397333 1343785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 22:12:12.397413 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 22:12:12.445506 1343785 cri.go:89] found id: "b4dfc8a81959e1e54f5ccd7dbce5e5b79e2e50ea3eb9c6ef60ed4cc3619a0208"
	I0328 22:12:12.445527 1343785 cri.go:89] found id: ""
	I0328 22:12:12.445535 1343785 logs.go:276] 1 containers: [b4dfc8a81959e1e54f5ccd7dbce5e5b79e2e50ea3eb9c6ef60ed4cc3619a0208]
	I0328 22:12:12.445602 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:12.449650 1343785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 22:12:12.449753 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 22:12:12.491692 1343785 cri.go:89] found id: "2629b02a9c3dc6762ccb0b6979179bee48869c9043cf2d708a2c66592fb3c3b5"
	I0328 22:12:12.491713 1343785 cri.go:89] found id: ""
	I0328 22:12:12.491721 1343785 logs.go:276] 1 containers: [2629b02a9c3dc6762ccb0b6979179bee48869c9043cf2d708a2c66592fb3c3b5]
	I0328 22:12:12.491799 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:12.495254 1343785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 22:12:12.495357 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 22:12:12.539841 1343785 cri.go:89] found id: "1964cd4dcb928edc5fb3ff4cd4a8a1b6afca96dbe666d2a6c6308a962a04fe15"
	I0328 22:12:12.539863 1343785 cri.go:89] found id: ""
	I0328 22:12:12.539875 1343785 logs.go:276] 1 containers: [1964cd4dcb928edc5fb3ff4cd4a8a1b6afca96dbe666d2a6c6308a962a04fe15]
	I0328 22:12:12.539951 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:12.543604 1343785 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0328 22:12:12.543673 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0328 22:12:12.583762 1343785 cri.go:89] found id: "4638f483f335cf1696691ad4ba3f3dc515c5e7c24fb50964ab418406966667e2"
	I0328 22:12:12.583834 1343785 cri.go:89] found id: "3bdeb47065970683bd1984cb42566138e7d30672fa0b08df7df8377ceb1c962a"
	I0328 22:12:12.583855 1343785 cri.go:89] found id: ""
	I0328 22:12:12.583878 1343785 logs.go:276] 2 containers: [4638f483f335cf1696691ad4ba3f3dc515c5e7c24fb50964ab418406966667e2 3bdeb47065970683bd1984cb42566138e7d30672fa0b08df7df8377ceb1c962a]
	I0328 22:12:12.583971 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:12.587596 1343785 ssh_runner.go:195] Run: which crictl
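
Each "listing CRI containers" / ssh_runner pair above resolves one control-plane component to a container ID by running `sudo crictl ps -a --quiet --name=<component>` and treating every non-empty output line as an ID (hence the "found id:" lines, including the empty trailing one). A rough local equivalent in Go, run directly rather than over SSH; package and function names are made up for illustration:

// Hypothetical sketch: list container IDs for a component via crictl,
// mirroring the `sudo crictl ps -a --quiet --name=...` calls in this log.
package cridiscover

import (
	"os/exec"
	"strings"
)

func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}
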
	I0328 22:12:12.590988 1343785 logs.go:123] Gathering logs for CRI-O ...
	I0328 22:12:12.591013 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 22:12:12.672553 1343785 logs.go:123] Gathering logs for kube-apiserver [2802d01750454277c58463803cc59154ce9ed35544da7d3b40387123a0807b37] ...
	I0328 22:12:12.672591 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2802d01750454277c58463803cc59154ce9ed35544da7d3b40387123a0807b37"
	I0328 22:12:12.724719 1343785 logs.go:123] Gathering logs for coredns [ad1c78378fd080073e6d7b4adcb4ebcd3512cdf786702dc1c9708c29fd3c43b7] ...
	I0328 22:12:12.724752 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad1c78378fd080073e6d7b4adcb4ebcd3512cdf786702dc1c9708c29fd3c43b7"
	I0328 22:12:12.775723 1343785 logs.go:123] Gathering logs for kube-scheduler [c0438a8c8a7c6ee24b7206071fa6b63dd553fbf14f4fd558528c57917097cde5] ...
	I0328 22:12:12.775750 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0438a8c8a7c6ee24b7206071fa6b63dd553fbf14f4fd558528c57917097cde5"
	I0328 22:12:12.822037 1343785 logs.go:123] Gathering logs for kube-controller-manager [b4dfc8a81959e1e54f5ccd7dbce5e5b79e2e50ea3eb9c6ef60ed4cc3619a0208] ...
	I0328 22:12:12.822064 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4dfc8a81959e1e54f5ccd7dbce5e5b79e2e50ea3eb9c6ef60ed4cc3619a0208"
	I0328 22:12:12.944698 1343785 logs.go:123] Gathering logs for storage-provisioner [3bdeb47065970683bd1984cb42566138e7d30672fa0b08df7df8377ceb1c962a] ...
	I0328 22:12:12.944737 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bdeb47065970683bd1984cb42566138e7d30672fa0b08df7df8377ceb1c962a"
	I0328 22:12:12.982978 1343785 logs.go:123] Gathering logs for storage-provisioner [4638f483f335cf1696691ad4ba3f3dc515c5e7c24fb50964ab418406966667e2] ...
	I0328 22:12:12.983011 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4638f483f335cf1696691ad4ba3f3dc515c5e7c24fb50964ab418406966667e2"
	I0328 22:12:13.025340 1343785 logs.go:123] Gathering logs for dmesg ...
	I0328 22:12:13.025367 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 22:12:13.049286 1343785 logs.go:123] Gathering logs for describe nodes ...
	I0328 22:12:13.049322 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 22:12:13.207531 1343785 logs.go:123] Gathering logs for etcd [77a89f4b79b5e13c02662c8ba6f6f3a922d205bf39b5c63ab0720b597e5ad2f4] ...
	I0328 22:12:13.207606 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77a89f4b79b5e13c02662c8ba6f6f3a922d205bf39b5c63ab0720b597e5ad2f4"
	I0328 22:12:13.265231 1343785 logs.go:123] Gathering logs for kube-proxy [02c2f0a0149bac38e3cc7ba146663278ece69b0c2c72593414ea1aa8114deb43] ...
	I0328 22:12:13.265264 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02c2f0a0149bac38e3cc7ba146663278ece69b0c2c72593414ea1aa8114deb43"
	I0328 22:12:13.303419 1343785 logs.go:123] Gathering logs for kubernetes-dashboard [1964cd4dcb928edc5fb3ff4cd4a8a1b6afca96dbe666d2a6c6308a962a04fe15] ...
	I0328 22:12:13.303449 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1964cd4dcb928edc5fb3ff4cd4a8a1b6afca96dbe666d2a6c6308a962a04fe15"
	I0328 22:12:13.349553 1343785 logs.go:123] Gathering logs for kubelet ...
	I0328 22:12:13.349584 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0328 22:12:13.381587 1343785 logs.go:138] Found kubelet problem: Mar 28 22:08:17 no-preload-363849 kubelet[750]: W0328 22:08:17.200950     750 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-363849" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-363849' and this object
	W0328 22:12:13.381824 1343785 logs.go:138] Found kubelet problem: Mar 28 22:08:17 no-preload-363849 kubelet[750]: E0328 22:08:17.201000     750 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-363849" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-363849' and this object
	I0328 22:12:13.426377 1343785 logs.go:123] Gathering logs for container status ...
	I0328 22:12:13.426416 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 22:12:13.481053 1343785 logs.go:123] Gathering logs for kindnet [2629b02a9c3dc6762ccb0b6979179bee48869c9043cf2d708a2c66592fb3c3b5] ...
	I0328 22:12:13.481084 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2629b02a9c3dc6762ccb0b6979179bee48869c9043cf2d708a2c66592fb3c3b5"
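
Each "Gathering logs for <component> [...]" step above shells out through bash, using `crictl logs --tail 400 <id>` for containers and `journalctl -n 400` for the crio and kubelet units. A bare-bones version of the crictl path; the names here are illustrative assumptions:

// Hypothetical sketch of the per-container log gathering above:
// run `sudo /usr/bin/crictl logs --tail <n> <id>` via bash -c, as ssh_runner does.
package loggather

import (
	"fmt"
	"os/exec"
)

func gatherContainerLogs(id string, tail int) (string, error) {
	cmd := fmt.Sprintf("sudo /usr/bin/crictl logs --tail %d %s", tail, id)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}
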
	I0328 22:12:13.522348 1343785 out.go:304] Setting ErrFile to fd 2...
	I0328 22:12:13.522383 1343785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 22:12:13.522442 1343785 out.go:239] X Problems detected in kubelet:
	W0328 22:12:13.522460 1343785 out.go:239]   Mar 28 22:08:17 no-preload-363849 kubelet[750]: W0328 22:08:17.200950     750 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-363849" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-363849' and this object
	W0328 22:12:13.522471 1343785 out.go:239]   Mar 28 22:08:17 no-preload-363849 kubelet[750]: E0328 22:08:17.201000     750 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-363849" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-363849' and this object
	I0328 22:12:13.522480 1343785 out.go:304] Setting ErrFile to fd 2...
	I0328 22:12:13.522489 1343785 out.go:338] TERM=,COLORTERM=, which probably does not support color
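
The "Problems detected in kubelet" summary just above is produced by scanning the captured `journalctl -u kubelet -n 400` output for warning- and error-level klog lines and echoing the matches. A simplified sketch of that scan; the klog-prefix heuristic and all names here are assumptions, not minikube's actual matcher:

// Hypothetical sketch: flag kubelet journal lines whose klog severity
// prefix is W (warning) or E (error), as the logs.go:138 lines above do.
package kubeletscan

import (
	"bufio"
	"os/exec"
	"strings"
)

func findKubeletProblems() ([]string, error) {
	out, err := exec.Command("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400").Output()
	if err != nil {
		return nil, err
	}
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		line := sc.Text()
		// crude heuristic: klog lines start with W<mmdd> or E<mmdd> after "kubelet[pid]:"
		if strings.Contains(line, ": W0") || strings.Contains(line, ": E0") {
			problems = append(problems, line)
		}
	}
	return problems, sc.Err()
}
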
	I0328 22:12:11.433605 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:13.437226 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:15.933636 1338826 pod_ready.go:102] pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace has status "Ready":"False"
	I0328 22:12:17.933340 1338826 pod_ready.go:81] duration metric: took 4m0.006183924s for pod "metrics-server-9975d5f86-h5ts8" in "kube-system" namespace to be "Ready" ...
	E0328 22:12:17.933366 1338826 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0328 22:12:17.933376 1338826 pod_ready.go:38] duration metric: took 5m30.183327024s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 22:12:17.933390 1338826 api_server.go:52] waiting for apiserver process to appear ...
	I0328 22:12:17.933418 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 22:12:17.933480 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 22:12:17.975520 1338826 cri.go:89] found id: "aafa2b2860b7d02f5f63f8f79e578efc4b8a612e8845f3c01b98f89c881a05f5"
	I0328 22:12:17.975543 1338826 cri.go:89] found id: ""
	I0328 22:12:17.975551 1338826 logs.go:276] 1 containers: [aafa2b2860b7d02f5f63f8f79e578efc4b8a612e8845f3c01b98f89c881a05f5]
	I0328 22:12:17.975605 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:17.978975 1338826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 22:12:17.979046 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 22:12:18.021754 1338826 cri.go:89] found id: "54454ca60825d1d8afde9d3954763066222f962cd282dd4ae5776677f1d02263"
	I0328 22:12:18.021779 1338826 cri.go:89] found id: ""
	I0328 22:12:18.021787 1338826 logs.go:276] 1 containers: [54454ca60825d1d8afde9d3954763066222f962cd282dd4ae5776677f1d02263]
	I0328 22:12:18.021845 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:18.025748 1338826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 22:12:18.025846 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 22:12:18.069778 1338826 cri.go:89] found id: "9a5cc20f063ad7f3789223916297cb6dbac2af4ce58e82d5bde29f4655036962"
	I0328 22:12:18.069809 1338826 cri.go:89] found id: ""
	I0328 22:12:18.069819 1338826 logs.go:276] 1 containers: [9a5cc20f063ad7f3789223916297cb6dbac2af4ce58e82d5bde29f4655036962]
	I0328 22:12:18.069881 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:18.073970 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 22:12:18.074054 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 22:12:18.116221 1338826 cri.go:89] found id: "0c1c9dee5f4ce24a9e91181634895b593f2aec0d21a11adbe16ddd1adce82d0b"
	I0328 22:12:18.116246 1338826 cri.go:89] found id: ""
	I0328 22:12:18.116254 1338826 logs.go:276] 1 containers: [0c1c9dee5f4ce24a9e91181634895b593f2aec0d21a11adbe16ddd1adce82d0b]
	I0328 22:12:18.116315 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:18.120360 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 22:12:18.120438 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 22:12:18.160651 1338826 cri.go:89] found id: "99963dd8c6223acf3495ce9e9c35c7f4c45a97d49ba1435654dd656d82537a95"
	I0328 22:12:18.160672 1338826 cri.go:89] found id: ""
	I0328 22:12:18.160681 1338826 logs.go:276] 1 containers: [99963dd8c6223acf3495ce9e9c35c7f4c45a97d49ba1435654dd656d82537a95]
	I0328 22:12:18.160741 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:18.164797 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 22:12:18.164870 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 22:12:18.208058 1338826 cri.go:89] found id: "accf5dfc23ea01d34e6a9b748711dcadd8f2af898f72bbc4348b539768308591"
	I0328 22:12:18.208165 1338826 cri.go:89] found id: ""
	I0328 22:12:18.208192 1338826 logs.go:276] 1 containers: [accf5dfc23ea01d34e6a9b748711dcadd8f2af898f72bbc4348b539768308591]
	I0328 22:12:18.208295 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:18.212186 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 22:12:18.212308 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 22:12:18.257286 1338826 cri.go:89] found id: "f410c25189f5f56842f0d6f5e959b565b53ee6a8d5258db914882d1e38e9ab2b"
	I0328 22:12:18.257353 1338826 cri.go:89] found id: ""
	I0328 22:12:18.257367 1338826 logs.go:276] 1 containers: [f410c25189f5f56842f0d6f5e959b565b53ee6a8d5258db914882d1e38e9ab2b]
	I0328 22:12:18.257425 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:18.263454 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 22:12:18.263535 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 22:12:18.301918 1338826 cri.go:89] found id: "dcb78e1b7465099ed99741a848314cba3d027ecfab8e44b022dc85231ab9a26d"
	I0328 22:12:18.301941 1338826 cri.go:89] found id: ""
	I0328 22:12:18.301949 1338826 logs.go:276] 1 containers: [dcb78e1b7465099ed99741a848314cba3d027ecfab8e44b022dc85231ab9a26d]
	I0328 22:12:18.302004 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:18.305551 1338826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0328 22:12:18.305626 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0328 22:12:18.346240 1338826 cri.go:89] found id: "dbebe9c9214b8446dd7e1ba58af2dcdda839029f75a4df5d16fce02680f0f723"
	I0328 22:12:18.346264 1338826 cri.go:89] found id: "de9034878b9e7c8d2e40e141b1c21f2a76586667af994d8465e9601d126955e0"
	I0328 22:12:18.346269 1338826 cri.go:89] found id: ""
	I0328 22:12:18.346277 1338826 logs.go:276] 2 containers: [dbebe9c9214b8446dd7e1ba58af2dcdda839029f75a4df5d16fce02680f0f723 de9034878b9e7c8d2e40e141b1c21f2a76586667af994d8465e9601d126955e0]
	I0328 22:12:18.346333 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:18.349950 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:18.353205 1338826 logs.go:123] Gathering logs for kubelet ...
	I0328 22:12:18.353227 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0328 22:12:18.405413 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.826315     742 reflector.go:138] object-"kube-system"/"storage-provisioner-token-htmqq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-htmqq" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:18.405657 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.826559     742 reflector.go:138] object-"default"/"default-token-skqvg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-skqvg" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:18.405882 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.826733     742 reflector.go:138] object-"kube-system"/"metrics-server-token-qkmwr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-qkmwr" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:18.406094 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.829336     742 reflector.go:138] object-"kube-system"/"coredns-token-zjvsj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-zjvsj" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:18.406306 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.829576     742 reflector.go:138] object-"kube-system"/"kindnet-token-g4wkb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-g4wkb" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:18.406507 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.829784     742 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:18.406719 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.829969     742 reflector.go:138] object-"kube-system"/"kube-proxy-token-l4tct": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-l4tct" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:18.406925 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.830130     742 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:18.415925 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:50 old-k8s-version-633693 kubelet[742]: E0328 22:06:50.527663     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0328 22:12:18.416174 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:50 old-k8s-version-633693 kubelet[742]: E0328 22:06:50.814044     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.418221 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:04 old-k8s-version-633693 kubelet[742]: E0328 22:07:04.699349     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0328 22:12:18.418545 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:07 old-k8s-version-633693 kubelet[742]: E0328 22:07:07.707146     742 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-kddb6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-kddb6" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:18.421763 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:15 old-k8s-version-633693 kubelet[742]: E0328 22:07:15.934579     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.422221 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:16 old-k8s-version-633693 kubelet[742]: E0328 22:07:16.937591     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.422406 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:19 old-k8s-version-633693 kubelet[742]: E0328 22:07:19.733048     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.422861 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:22 old-k8s-version-633693 kubelet[742]: E0328 22:07:22.808225     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.424949 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:34 old-k8s-version-633693 kubelet[742]: E0328 22:07:34.698485     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0328 22:12:18.425546 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:36 old-k8s-version-633693 kubelet[742]: E0328 22:07:36.977259     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.425874 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:42 old-k8s-version-633693 kubelet[742]: E0328 22:07:42.808747     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.426059 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:48 old-k8s-version-633693 kubelet[742]: E0328 22:07:48.688107     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.426384 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:54 old-k8s-version-633693 kubelet[742]: E0328 22:07:54.687538     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.426568 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:02 old-k8s-version-633693 kubelet[742]: E0328 22:08:02.688287     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.427151 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:09 old-k8s-version-633693 kubelet[742]: E0328 22:08:09.025390     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.427477 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:12 old-k8s-version-633693 kubelet[742]: E0328 22:08:12.808271     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.427660 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:14 old-k8s-version-633693 kubelet[742]: E0328 22:08:14.687911     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.429727 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:25 old-k8s-version-633693 kubelet[742]: E0328 22:08:25.704436     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0328 22:12:18.430054 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:27 old-k8s-version-633693 kubelet[742]: E0328 22:08:27.687797     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.430237 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:39 old-k8s-version-633693 kubelet[742]: E0328 22:08:39.688581     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.430562 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:41 old-k8s-version-633693 kubelet[742]: E0328 22:08:41.687550     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.430787 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:51 old-k8s-version-633693 kubelet[742]: E0328 22:08:51.689572     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.431378 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:55 old-k8s-version-633693 kubelet[742]: E0328 22:08:55.094326     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.431561 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:02 old-k8s-version-633693 kubelet[742]: E0328 22:09:02.687902     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.431886 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:02 old-k8s-version-633693 kubelet[742]: E0328 22:09:02.808415     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.432226 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:13 old-k8s-version-633693 kubelet[742]: E0328 22:09:13.689276     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.432413 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:14 old-k8s-version-633693 kubelet[742]: E0328 22:09:14.688165     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.432741 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:24 old-k8s-version-633693 kubelet[742]: E0328 22:09:24.687486     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.432928 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:29 old-k8s-version-633693 kubelet[742]: E0328 22:09:29.688477     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.433254 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:35 old-k8s-version-633693 kubelet[742]: E0328 22:09:35.687479     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.433952 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:40 old-k8s-version-633693 kubelet[742]: E0328 22:09:40.688238     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.434277 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:48 old-k8s-version-633693 kubelet[742]: E0328 22:09:48.687433     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.436363 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:51 old-k8s-version-633693 kubelet[742]: E0328 22:09:51.697009     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0328 22:12:18.436691 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:02 old-k8s-version-633693 kubelet[742]: E0328 22:10:02.687437     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.436876 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:03 old-k8s-version-633693 kubelet[742]: E0328 22:10:03.688247     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.437462 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:16 old-k8s-version-633693 kubelet[742]: E0328 22:10:16.214323     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.437646 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:16 old-k8s-version-633693 kubelet[742]: E0328 22:10:16.688145     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.437971 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:22 old-k8s-version-633693 kubelet[742]: E0328 22:10:22.808176     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.438155 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:29 old-k8s-version-633693 kubelet[742]: E0328 22:10:29.688127     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.438480 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:33 old-k8s-version-633693 kubelet[742]: E0328 22:10:33.687521     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.438666 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:40 old-k8s-version-633693 kubelet[742]: E0328 22:10:40.688295     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.438993 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:46 old-k8s-version-633693 kubelet[742]: E0328 22:10:46.687410     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.439177 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:52 old-k8s-version-633693 kubelet[742]: E0328 22:10:52.688204     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.439502 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:57 old-k8s-version-633693 kubelet[742]: E0328 22:10:57.687778     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.439686 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:07 old-k8s-version-633693 kubelet[742]: E0328 22:11:07.688569     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.440010 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:10 old-k8s-version-633693 kubelet[742]: E0328 22:11:10.687505     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.440203 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:21 old-k8s-version-633693 kubelet[742]: E0328 22:11:21.688528     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.440529 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:25 old-k8s-version-633693 kubelet[742]: E0328 22:11:25.687833     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.440714 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:36 old-k8s-version-633693 kubelet[742]: E0328 22:11:36.687961     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.441289 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:39 old-k8s-version-633693 kubelet[742]: E0328 22:11:39.687565     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.441475 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:51 old-k8s-version-633693 kubelet[742]: E0328 22:11:51.689147     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.441800 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:54 old-k8s-version-633693 kubelet[742]: E0328 22:11:54.687465     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.441983 1338826 logs.go:138] Found kubelet problem: Mar 28 22:12:02 old-k8s-version-633693 kubelet[742]: E0328 22:12:02.688202     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:18.442307 1338826 logs.go:138] Found kubelet problem: Mar 28 22:12:06 old-k8s-version-633693 kubelet[742]: E0328 22:12:06.687449     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.442634 1338826 logs.go:138] Found kubelet problem: Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: E0328 22:12:17.687727     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:18.442816 1338826 logs.go:138] Found kubelet problem: Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: E0328 22:12:17.689382     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
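Every kubelet problem flagged above traces back to the same two pods: metrics-server-9975d5f86-h5ts8 cannot pull its image because the test points it at the deliberately unreachable registry fake.domain (hence the ImagePullBackOff), and dashboard-metrics-scraper-8d5bb5db8-z2tdh keeps crashing into an ever-longer CrashLoopBackOff (10s, 20s, 40s, 1m20s, 2m40s). A minimal sketch of inspecting these two pods by hand, assuming the usual minikube kubectl context for the profile (the sed filter is a convenience, not part of the test):

    # Show the "Back-off pulling image" events for the metrics-server pod.
    kubectl --context old-k8s-version-633693 -n kube-system \
      describe pod metrics-server-9975d5f86-h5ts8 | sed -n '/Events:/,$p'
    # Show the output of the last crashed dashboard-metrics-scraper run.
    kubectl --context old-k8s-version-633693 -n kubernetes-dashboard \
      logs dashboard-metrics-scraper-8d5bb5db8-z2tdh --previous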
	I0328 22:12:18.442826 1338826 logs.go:123] Gathering logs for describe nodes ...
	I0328 22:12:18.442840 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 22:12:18.610600 1338826 logs.go:123] Gathering logs for coredns [9a5cc20f063ad7f3789223916297cb6dbac2af4ce58e82d5bde29f4655036962] ...
	I0328 22:12:18.610629 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a5cc20f063ad7f3789223916297cb6dbac2af4ce58e82d5bde29f4655036962"
	I0328 22:12:18.651719 1338826 logs.go:123] Gathering logs for storage-provisioner [de9034878b9e7c8d2e40e141b1c21f2a76586667af994d8465e9601d126955e0] ...
	I0328 22:12:18.651747 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9034878b9e7c8d2e40e141b1c21f2a76586667af994d8465e9601d126955e0"
	I0328 22:12:18.692899 1338826 logs.go:123] Gathering logs for container status ...
	I0328 22:12:18.692993 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 22:12:18.756410 1338826 logs.go:123] Gathering logs for kube-apiserver [aafa2b2860b7d02f5f63f8f79e578efc4b8a612e8845f3c01b98f89c881a05f5] ...
	I0328 22:12:18.756441 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aafa2b2860b7d02f5f63f8f79e578efc4b8a612e8845f3c01b98f89c881a05f5"
	I0328 22:12:18.825430 1338826 logs.go:123] Gathering logs for kube-controller-manager [accf5dfc23ea01d34e6a9b748711dcadd8f2af898f72bbc4348b539768308591] ...
	I0328 22:12:18.825468 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 accf5dfc23ea01d34e6a9b748711dcadd8f2af898f72bbc4348b539768308591"
	I0328 22:12:18.915204 1338826 logs.go:123] Gathering logs for kindnet [f410c25189f5f56842f0d6f5e959b565b53ee6a8d5258db914882d1e38e9ab2b] ...
	I0328 22:12:18.915239 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f410c25189f5f56842f0d6f5e959b565b53ee6a8d5258db914882d1e38e9ab2b"
	I0328 22:12:18.970831 1338826 logs.go:123] Gathering logs for kube-proxy [99963dd8c6223acf3495ce9e9c35c7f4c45a97d49ba1435654dd656d82537a95] ...
	I0328 22:12:18.970914 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99963dd8c6223acf3495ce9e9c35c7f4c45a97d49ba1435654dd656d82537a95"
	I0328 22:12:19.019338 1338826 logs.go:123] Gathering logs for storage-provisioner [dbebe9c9214b8446dd7e1ba58af2dcdda839029f75a4df5d16fce02680f0f723] ...
	I0328 22:12:19.019365 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbebe9c9214b8446dd7e1ba58af2dcdda839029f75a4df5d16fce02680f0f723"
	I0328 22:12:19.064283 1338826 logs.go:123] Gathering logs for kubernetes-dashboard [dcb78e1b7465099ed99741a848314cba3d027ecfab8e44b022dc85231ab9a26d] ...
	I0328 22:12:19.064313 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcb78e1b7465099ed99741a848314cba3d027ecfab8e44b022dc85231ab9a26d"
	I0328 22:12:19.104471 1338826 logs.go:123] Gathering logs for CRI-O ...
	I0328 22:12:19.104499 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 22:12:19.184921 1338826 logs.go:123] Gathering logs for dmesg ...
	I0328 22:12:19.184961 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 22:12:19.203985 1338826 logs.go:123] Gathering logs for etcd [54454ca60825d1d8afde9d3954763066222f962cd282dd4ae5776677f1d02263] ...
	I0328 22:12:19.204137 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54454ca60825d1d8afde9d3954763066222f962cd282dd4ae5776677f1d02263"
	I0328 22:12:19.250334 1338826 logs.go:123] Gathering logs for kube-scheduler [0c1c9dee5f4ce24a9e91181634895b593f2aec0d21a11adbe16ddd1adce82d0b] ...
	I0328 22:12:19.250368 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c1c9dee5f4ce24a9e91181634895b593f2aec0d21a11adbe16ddd1adce82d0b"
	I0328 22:12:19.293923 1338826 out.go:304] Setting ErrFile to fd 2...
	I0328 22:12:19.293952 1338826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 22:12:19.294030 1338826 out.go:239] X Problems detected in kubelet:
	W0328 22:12:19.294045 1338826 out.go:239]   Mar 28 22:11:54 old-k8s-version-633693 kubelet[742]: E0328 22:11:54.687465     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:19.294087 1338826 out.go:239]   Mar 28 22:12:02 old-k8s-version-633693 kubelet[742]: E0328 22:12:02.688202     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:19.294098 1338826 out.go:239]   Mar 28 22:12:06 old-k8s-version-633693 kubelet[742]: E0328 22:12:06.687449     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:19.294105 1338826 out.go:239]   Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: E0328 22:12:17.687727     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:19.294114 1338826 out.go:239]   Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: E0328 22:12:17.689382     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0328 22:12:19.294121 1338826 out.go:304] Setting ErrFile to fd 2...
	I0328 22:12:19.294128 1338826 out.go:338] TERM=,COLORTERM=, which probably does not support color
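The pass that just completed is minikube's standard log sweep: it resolves each component's container id with crictl, then tails the last 400 lines of each. A hedged shell equivalent of that loop, run on the node itself (it assumes crictl is on PATH, as the `which crictl` probes above confirm):

    # Re-run the same sweep by hand: find each container by name, tail its logs.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard storage-provisioner; do
      for id in $(sudo crictl ps -a --quiet --name="$name"); do
        echo "== $name ($id) =="
        sudo crictl logs --tail 400 "$id"
      done
    done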
	I0328 22:12:23.523244 1343785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 22:12:23.535238 1343785 api_server.go:72] duration metric: took 4m27.2823195s to wait for apiserver process to appear ...
	I0328 22:12:23.535268 1343785 api_server.go:88] waiting for apiserver healthz status ...
	I0328 22:12:23.535302 1343785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 22:12:23.535366 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 22:12:23.573014 1343785 cri.go:89] found id: "2802d01750454277c58463803cc59154ce9ed35544da7d3b40387123a0807b37"
	I0328 22:12:23.573035 1343785 cri.go:89] found id: ""
	I0328 22:12:23.573043 1343785 logs.go:276] 1 containers: [2802d01750454277c58463803cc59154ce9ed35544da7d3b40387123a0807b37]
	I0328 22:12:23.573101 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:23.576560 1343785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 22:12:23.576641 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 22:12:23.621690 1343785 cri.go:89] found id: "77a89f4b79b5e13c02662c8ba6f6f3a922d205bf39b5c63ab0720b597e5ad2f4"
	I0328 22:12:23.622946 1343785 cri.go:89] found id: ""
	I0328 22:12:23.622957 1343785 logs.go:276] 1 containers: [77a89f4b79b5e13c02662c8ba6f6f3a922d205bf39b5c63ab0720b597e5ad2f4]
	I0328 22:12:23.623023 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:23.626834 1343785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 22:12:23.626906 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 22:12:23.663733 1343785 cri.go:89] found id: "ad1c78378fd080073e6d7b4adcb4ebcd3512cdf786702dc1c9708c29fd3c43b7"
	I0328 22:12:23.663768 1343785 cri.go:89] found id: ""
	I0328 22:12:23.663776 1343785 logs.go:276] 1 containers: [ad1c78378fd080073e6d7b4adcb4ebcd3512cdf786702dc1c9708c29fd3c43b7]
	I0328 22:12:23.663846 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:23.667380 1343785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 22:12:23.667450 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 22:12:23.705506 1343785 cri.go:89] found id: "c0438a8c8a7c6ee24b7206071fa6b63dd553fbf14f4fd558528c57917097cde5"
	I0328 22:12:23.705527 1343785 cri.go:89] found id: ""
	I0328 22:12:23.705535 1343785 logs.go:276] 1 containers: [c0438a8c8a7c6ee24b7206071fa6b63dd553fbf14f4fd558528c57917097cde5]
	I0328 22:12:23.705610 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:23.709096 1343785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 22:12:23.709162 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 22:12:23.749488 1343785 cri.go:89] found id: "02c2f0a0149bac38e3cc7ba146663278ece69b0c2c72593414ea1aa8114deb43"
	I0328 22:12:23.749508 1343785 cri.go:89] found id: ""
	I0328 22:12:23.749516 1343785 logs.go:276] 1 containers: [02c2f0a0149bac38e3cc7ba146663278ece69b0c2c72593414ea1aa8114deb43]
	I0328 22:12:23.749595 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:23.753123 1343785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 22:12:23.753206 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 22:12:23.796746 1343785 cri.go:89] found id: "b4dfc8a81959e1e54f5ccd7dbce5e5b79e2e50ea3eb9c6ef60ed4cc3619a0208"
	I0328 22:12:23.796771 1343785 cri.go:89] found id: ""
	I0328 22:12:23.796779 1343785 logs.go:276] 1 containers: [b4dfc8a81959e1e54f5ccd7dbce5e5b79e2e50ea3eb9c6ef60ed4cc3619a0208]
	I0328 22:12:23.796832 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:23.800250 1343785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 22:12:23.800325 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 22:12:23.842130 1343785 cri.go:89] found id: "2629b02a9c3dc6762ccb0b6979179bee48869c9043cf2d708a2c66592fb3c3b5"
	I0328 22:12:23.842163 1343785 cri.go:89] found id: ""
	I0328 22:12:23.842172 1343785 logs.go:276] 1 containers: [2629b02a9c3dc6762ccb0b6979179bee48869c9043cf2d708a2c66592fb3c3b5]
	I0328 22:12:23.842233 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:23.846713 1343785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 22:12:23.846791 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 22:12:23.884150 1343785 cri.go:89] found id: "1964cd4dcb928edc5fb3ff4cd4a8a1b6afca96dbe666d2a6c6308a962a04fe15"
	I0328 22:12:23.884179 1343785 cri.go:89] found id: ""
	I0328 22:12:23.884187 1343785 logs.go:276] 1 containers: [1964cd4dcb928edc5fb3ff4cd4a8a1b6afca96dbe666d2a6c6308a962a04fe15]
	I0328 22:12:23.884243 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:23.887848 1343785 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0328 22:12:23.887919 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0328 22:12:23.931035 1343785 cri.go:89] found id: "4638f483f335cf1696691ad4ba3f3dc515c5e7c24fb50964ab418406966667e2"
	I0328 22:12:23.931058 1343785 cri.go:89] found id: "3bdeb47065970683bd1984cb42566138e7d30672fa0b08df7df8377ceb1c962a"
	I0328 22:12:23.931063 1343785 cri.go:89] found id: ""
	I0328 22:12:23.931070 1343785 logs.go:276] 2 containers: [4638f483f335cf1696691ad4ba3f3dc515c5e7c24fb50964ab418406966667e2 3bdeb47065970683bd1984cb42566138e7d30672fa0b08df7df8377ceb1c962a]
	I0328 22:12:23.931132 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:23.934849 1343785 ssh_runner.go:195] Run: which crictl
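Two storage-provisioner containers turn up here where every other component has exactly one; presumably one is the current container and the other exited before the restart, which is why the sweep below tails both ids. A quick way to confirm their states on the node (-o is crictl's standard output-format flag):

    # List all storage-provisioner containers with their states (Running/Exited).
    sudo crictl ps -a --name=storage-provisioner -o table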
	I0328 22:12:23.938226 1343785 logs.go:123] Gathering logs for kube-apiserver [2802d01750454277c58463803cc59154ce9ed35544da7d3b40387123a0807b37] ...
	I0328 22:12:23.938253 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2802d01750454277c58463803cc59154ce9ed35544da7d3b40387123a0807b37"
	I0328 22:12:23.988318 1343785 logs.go:123] Gathering logs for coredns [ad1c78378fd080073e6d7b4adcb4ebcd3512cdf786702dc1c9708c29fd3c43b7] ...
	I0328 22:12:23.988350 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad1c78378fd080073e6d7b4adcb4ebcd3512cdf786702dc1c9708c29fd3c43b7"
	I0328 22:12:24.036815 1343785 logs.go:123] Gathering logs for kube-scheduler [c0438a8c8a7c6ee24b7206071fa6b63dd553fbf14f4fd558528c57917097cde5] ...
	I0328 22:12:24.036844 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0438a8c8a7c6ee24b7206071fa6b63dd553fbf14f4fd558528c57917097cde5"
	I0328 22:12:24.079329 1343785 logs.go:123] Gathering logs for kube-proxy [02c2f0a0149bac38e3cc7ba146663278ece69b0c2c72593414ea1aa8114deb43] ...
	I0328 22:12:24.079360 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02c2f0a0149bac38e3cc7ba146663278ece69b0c2c72593414ea1aa8114deb43"
	I0328 22:12:24.122361 1343785 logs.go:123] Gathering logs for etcd [77a89f4b79b5e13c02662c8ba6f6f3a922d205bf39b5c63ab0720b597e5ad2f4] ...
	I0328 22:12:24.122394 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77a89f4b79b5e13c02662c8ba6f6f3a922d205bf39b5c63ab0720b597e5ad2f4"
	I0328 22:12:24.174268 1343785 logs.go:123] Gathering logs for kube-controller-manager [b4dfc8a81959e1e54f5ccd7dbce5e5b79e2e50ea3eb9c6ef60ed4cc3619a0208] ...
	I0328 22:12:24.174301 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4dfc8a81959e1e54f5ccd7dbce5e5b79e2e50ea3eb9c6ef60ed4cc3619a0208"
	I0328 22:12:24.234523 1343785 logs.go:123] Gathering logs for storage-provisioner [3bdeb47065970683bd1984cb42566138e7d30672fa0b08df7df8377ceb1c962a] ...
	I0328 22:12:24.234559 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bdeb47065970683bd1984cb42566138e7d30672fa0b08df7df8377ceb1c962a"
	I0328 22:12:24.286185 1343785 logs.go:123] Gathering logs for CRI-O ...
	I0328 22:12:24.286238 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 22:12:24.369922 1343785 logs.go:123] Gathering logs for container status ...
	I0328 22:12:24.369997 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 22:12:24.414293 1343785 logs.go:123] Gathering logs for kubelet ...
	I0328 22:12:24.414320 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0328 22:12:24.444721 1343785 logs.go:138] Found kubelet problem: Mar 28 22:08:17 no-preload-363849 kubelet[750]: W0328 22:08:17.200950     750 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-363849" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-363849' and this object
	W0328 22:12:24.444981 1343785 logs.go:138] Found kubelet problem: Mar 28 22:08:17 no-preload-363849 kubelet[750]: E0328 22:08:17.201000     750 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-363849" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-363849' and this object
	I0328 22:12:24.490788 1343785 logs.go:123] Gathering logs for dmesg ...
	I0328 22:12:24.490819 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 22:12:24.514484 1343785 logs.go:123] Gathering logs for describe nodes ...
	I0328 22:12:24.514511 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 22:12:24.641992 1343785 logs.go:123] Gathering logs for kindnet [2629b02a9c3dc6762ccb0b6979179bee48869c9043cf2d708a2c66592fb3c3b5] ...
	I0328 22:12:24.642027 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2629b02a9c3dc6762ccb0b6979179bee48869c9043cf2d708a2c66592fb3c3b5"
	I0328 22:12:24.683454 1343785 logs.go:123] Gathering logs for kubernetes-dashboard [1964cd4dcb928edc5fb3ff4cd4a8a1b6afca96dbe666d2a6c6308a962a04fe15] ...
	I0328 22:12:24.683561 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1964cd4dcb928edc5fb3ff4cd4a8a1b6afca96dbe666d2a6c6308a962a04fe15"
	I0328 22:12:24.724286 1343785 logs.go:123] Gathering logs for storage-provisioner [4638f483f335cf1696691ad4ba3f3dc515c5e7c24fb50964ab418406966667e2] ...
	I0328 22:12:24.724314 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4638f483f335cf1696691ad4ba3f3dc515c5e7c24fb50964ab418406966667e2"
	I0328 22:12:24.760795 1343785 out.go:304] Setting ErrFile to fd 2...
	I0328 22:12:24.760818 1343785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 22:12:24.760888 1343785 out.go:239] X Problems detected in kubelet:
	W0328 22:12:24.760900 1343785 out.go:239]   Mar 28 22:08:17 no-preload-363849 kubelet[750]: W0328 22:08:17.200950     750 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-363849" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-363849' and this object
	W0328 22:12:24.760907 1343785 out.go:239]   Mar 28 22:08:17 no-preload-363849 kubelet[750]: E0328 22:08:17.201000     750 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-363849" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-363849' and this object
	I0328 22:12:24.760931 1343785 out.go:304] Setting ErrFile to fd 2...
	I0328 22:12:24.760943 1343785 out.go:338] TERM=,COLORTERM=, which probably does not support color
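The only kubelet problems flagged for no-preload-363849 are the two kube-root-ca.crt reflector failures above. The "no relationship found between node ... and this object" wording is the node authorizer's, and it usually means that at that moment during the restart no pod bound to the node referenced the object yet, so these entries are transient rather than a test defect. A hedged way to replay the authorization check from the test host (the --as/--as-group impersonation flags are standard kubectl):

    # Ask the apiserver whether the node identity may list the configmap,
    # i.e. the same check the node authorizer applied to the kubelet above.
    kubectl --context no-preload-363849 auth can-i list configmaps \
      -n kubernetes-dashboard --as=system:node:no-preload-363849 --as-group=system:nodes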
	I0328 22:12:29.295105 1338826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 22:12:29.306579 1338826 api_server.go:72] duration metric: took 5m59.438575826s to wait for apiserver process to appear ...
	I0328 22:12:29.306606 1338826 api_server.go:88] waiting for apiserver healthz status ...
	I0328 22:12:29.306641 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 22:12:29.306705 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 22:12:29.345775 1338826 cri.go:89] found id: "aafa2b2860b7d02f5f63f8f79e578efc4b8a612e8845f3c01b98f89c881a05f5"
	I0328 22:12:29.345797 1338826 cri.go:89] found id: ""
	I0328 22:12:29.345805 1338826 logs.go:276] 1 containers: [aafa2b2860b7d02f5f63f8f79e578efc4b8a612e8845f3c01b98f89c881a05f5]
	I0328 22:12:29.345862 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:29.349735 1338826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 22:12:29.349812 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 22:12:29.389469 1338826 cri.go:89] found id: "54454ca60825d1d8afde9d3954763066222f962cd282dd4ae5776677f1d02263"
	I0328 22:12:29.389500 1338826 cri.go:89] found id: ""
	I0328 22:12:29.389510 1338826 logs.go:276] 1 containers: [54454ca60825d1d8afde9d3954763066222f962cd282dd4ae5776677f1d02263]
	I0328 22:12:29.389587 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:29.393252 1338826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 22:12:29.393335 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 22:12:29.435046 1338826 cri.go:89] found id: "9a5cc20f063ad7f3789223916297cb6dbac2af4ce58e82d5bde29f4655036962"
	I0328 22:12:29.435068 1338826 cri.go:89] found id: ""
	I0328 22:12:29.435076 1338826 logs.go:276] 1 containers: [9a5cc20f063ad7f3789223916297cb6dbac2af4ce58e82d5bde29f4655036962]
	I0328 22:12:29.435134 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:29.439053 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 22:12:29.439135 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 22:12:29.478063 1338826 cri.go:89] found id: "0c1c9dee5f4ce24a9e91181634895b593f2aec0d21a11adbe16ddd1adce82d0b"
	I0328 22:12:29.478084 1338826 cri.go:89] found id: ""
	I0328 22:12:29.478092 1338826 logs.go:276] 1 containers: [0c1c9dee5f4ce24a9e91181634895b593f2aec0d21a11adbe16ddd1adce82d0b]
	I0328 22:12:29.478148 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:29.481825 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 22:12:29.481896 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 22:12:29.524721 1338826 cri.go:89] found id: "99963dd8c6223acf3495ce9e9c35c7f4c45a97d49ba1435654dd656d82537a95"
	I0328 22:12:29.524784 1338826 cri.go:89] found id: ""
	I0328 22:12:29.524806 1338826 logs.go:276] 1 containers: [99963dd8c6223acf3495ce9e9c35c7f4c45a97d49ba1435654dd656d82537a95]
	I0328 22:12:29.524879 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:29.528593 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 22:12:29.528672 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 22:12:29.570444 1338826 cri.go:89] found id: "accf5dfc23ea01d34e6a9b748711dcadd8f2af898f72bbc4348b539768308591"
	I0328 22:12:29.570465 1338826 cri.go:89] found id: ""
	I0328 22:12:29.570472 1338826 logs.go:276] 1 containers: [accf5dfc23ea01d34e6a9b748711dcadd8f2af898f72bbc4348b539768308591]
	I0328 22:12:29.570548 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:29.574273 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 22:12:29.574347 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 22:12:29.615918 1338826 cri.go:89] found id: "f410c25189f5f56842f0d6f5e959b565b53ee6a8d5258db914882d1e38e9ab2b"
	I0328 22:12:29.615941 1338826 cri.go:89] found id: ""
	I0328 22:12:29.615948 1338826 logs.go:276] 1 containers: [f410c25189f5f56842f0d6f5e959b565b53ee6a8d5258db914882d1e38e9ab2b]
	I0328 22:12:29.616002 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:29.619941 1338826 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 22:12:29.620011 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 22:12:29.656396 1338826 cri.go:89] found id: "dcb78e1b7465099ed99741a848314cba3d027ecfab8e44b022dc85231ab9a26d"
	I0328 22:12:29.656421 1338826 cri.go:89] found id: ""
	I0328 22:12:29.656429 1338826 logs.go:276] 1 containers: [dcb78e1b7465099ed99741a848314cba3d027ecfab8e44b022dc85231ab9a26d]
	I0328 22:12:29.656484 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:29.660191 1338826 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0328 22:12:29.660260 1338826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0328 22:12:29.697904 1338826 cri.go:89] found id: "dbebe9c9214b8446dd7e1ba58af2dcdda839029f75a4df5d16fce02680f0f723"
	I0328 22:12:29.697933 1338826 cri.go:89] found id: "de9034878b9e7c8d2e40e141b1c21f2a76586667af994d8465e9601d126955e0"
	I0328 22:12:29.697938 1338826 cri.go:89] found id: ""
	I0328 22:12:29.697945 1338826 logs.go:276] 2 containers: [dbebe9c9214b8446dd7e1ba58af2dcdda839029f75a4df5d16fce02680f0f723 de9034878b9e7c8d2e40e141b1c21f2a76586667af994d8465e9601d126955e0]
	I0328 22:12:29.698002 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:29.701729 1338826 ssh_runner.go:195] Run: which crictl
	I0328 22:12:29.705384 1338826 logs.go:123] Gathering logs for kube-proxy [99963dd8c6223acf3495ce9e9c35c7f4c45a97d49ba1435654dd656d82537a95] ...
	I0328 22:12:29.705454 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99963dd8c6223acf3495ce9e9c35c7f4c45a97d49ba1435654dd656d82537a95"
	I0328 22:12:29.749991 1338826 logs.go:123] Gathering logs for storage-provisioner [dbebe9c9214b8446dd7e1ba58af2dcdda839029f75a4df5d16fce02680f0f723] ...
	I0328 22:12:29.750020 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbebe9c9214b8446dd7e1ba58af2dcdda839029f75a4df5d16fce02680f0f723"
	I0328 22:12:29.795978 1338826 logs.go:123] Gathering logs for storage-provisioner [de9034878b9e7c8d2e40e141b1c21f2a76586667af994d8465e9601d126955e0] ...
	I0328 22:12:29.796009 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de9034878b9e7c8d2e40e141b1c21f2a76586667af994d8465e9601d126955e0"
	I0328 22:12:29.859914 1338826 logs.go:123] Gathering logs for dmesg ...
	I0328 22:12:29.859942 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 22:12:29.880830 1338826 logs.go:123] Gathering logs for etcd [54454ca60825d1d8afde9d3954763066222f962cd282dd4ae5776677f1d02263] ...
	I0328 22:12:29.880857 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54454ca60825d1d8afde9d3954763066222f962cd282dd4ae5776677f1d02263"
	I0328 22:12:29.935958 1338826 logs.go:123] Gathering logs for kubernetes-dashboard [dcb78e1b7465099ed99741a848314cba3d027ecfab8e44b022dc85231ab9a26d] ...
	I0328 22:12:29.935992 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcb78e1b7465099ed99741a848314cba3d027ecfab8e44b022dc85231ab9a26d"
	I0328 22:12:29.981822 1338826 logs.go:123] Gathering logs for describe nodes ...
	I0328 22:12:29.981850 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 22:12:30.152830 1338826 logs.go:123] Gathering logs for kindnet [f410c25189f5f56842f0d6f5e959b565b53ee6a8d5258db914882d1e38e9ab2b] ...
	I0328 22:12:30.152868 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f410c25189f5f56842f0d6f5e959b565b53ee6a8d5258db914882d1e38e9ab2b"
	I0328 22:12:30.199134 1338826 logs.go:123] Gathering logs for CRI-O ...
	I0328 22:12:30.199177 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 22:12:30.283626 1338826 logs.go:123] Gathering logs for container status ...
	I0328 22:12:30.283664 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 22:12:30.353160 1338826 logs.go:123] Gathering logs for kube-scheduler [0c1c9dee5f4ce24a9e91181634895b593f2aec0d21a11adbe16ddd1adce82d0b] ...
	I0328 22:12:30.353193 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c1c9dee5f4ce24a9e91181634895b593f2aec0d21a11adbe16ddd1adce82d0b"
	I0328 22:12:30.405400 1338826 logs.go:123] Gathering logs for kube-controller-manager [accf5dfc23ea01d34e6a9b748711dcadd8f2af898f72bbc4348b539768308591] ...
	I0328 22:12:30.405432 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 accf5dfc23ea01d34e6a9b748711dcadd8f2af898f72bbc4348b539768308591"
	I0328 22:12:30.506110 1338826 logs.go:123] Gathering logs for coredns [9a5cc20f063ad7f3789223916297cb6dbac2af4ce58e82d5bde29f4655036962] ...
	I0328 22:12:30.506147 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a5cc20f063ad7f3789223916297cb6dbac2af4ce58e82d5bde29f4655036962"
	I0328 22:12:30.564490 1338826 logs.go:123] Gathering logs for kubelet ...
	I0328 22:12:30.564518 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0328 22:12:30.618150 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.826315     742 reflector.go:138] object-"kube-system"/"storage-provisioner-token-htmqq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-htmqq" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:30.618396 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.826559     742 reflector.go:138] object-"default"/"default-token-skqvg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-skqvg" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:30.618624 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.826733     742 reflector.go:138] object-"kube-system"/"metrics-server-token-qkmwr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-qkmwr" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:30.618835 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.829336     742 reflector.go:138] object-"kube-system"/"coredns-token-zjvsj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-zjvsj" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:30.619044 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.829576     742 reflector.go:138] object-"kube-system"/"kindnet-token-g4wkb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-g4wkb" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:30.619249 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.829784     742 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:30.619463 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.829969     742 reflector.go:138] object-"kube-system"/"kube-proxy-token-l4tct": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-l4tct" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:30.619666 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:47 old-k8s-version-633693 kubelet[742]: E0328 22:06:47.830130     742 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:30.628872 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:50 old-k8s-version-633693 kubelet[742]: E0328 22:06:50.527663     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0328 22:12:30.629070 1338826 logs.go:138] Found kubelet problem: Mar 28 22:06:50 old-k8s-version-633693 kubelet[742]: E0328 22:06:50.814044     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.631140 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:04 old-k8s-version-633693 kubelet[742]: E0328 22:07:04.699349     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0328 22:12:30.631470 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:07 old-k8s-version-633693 kubelet[742]: E0328 22:07:07.707146     742 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-kddb6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-kddb6" is forbidden: User "system:node:old-k8s-version-633693" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-633693' and this object
	W0328 22:12:30.634771 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:15 old-k8s-version-633693 kubelet[742]: E0328 22:07:15.934579     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.635235 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:16 old-k8s-version-633693 kubelet[742]: E0328 22:07:16.937591     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.635423 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:19 old-k8s-version-633693 kubelet[742]: E0328 22:07:19.733048     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.635928 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:22 old-k8s-version-633693 kubelet[742]: E0328 22:07:22.808225     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.638464 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:34 old-k8s-version-633693 kubelet[742]: E0328 22:07:34.698485     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0328 22:12:30.639072 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:36 old-k8s-version-633693 kubelet[742]: E0328 22:07:36.977259     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.639400 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:42 old-k8s-version-633693 kubelet[742]: E0328 22:07:42.808747     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.639586 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:48 old-k8s-version-633693 kubelet[742]: E0328 22:07:48.688107     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.639926 1338826 logs.go:138] Found kubelet problem: Mar 28 22:07:54 old-k8s-version-633693 kubelet[742]: E0328 22:07:54.687538     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.640169 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:02 old-k8s-version-633693 kubelet[742]: E0328 22:08:02.688287     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.640768 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:09 old-k8s-version-633693 kubelet[742]: E0328 22:08:09.025390     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.641095 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:12 old-k8s-version-633693 kubelet[742]: E0328 22:08:12.808271     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.641278 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:14 old-k8s-version-633693 kubelet[742]: E0328 22:08:14.687911     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.643355 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:25 old-k8s-version-633693 kubelet[742]: E0328 22:08:25.704436     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0328 22:12:30.643711 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:27 old-k8s-version-633693 kubelet[742]: E0328 22:08:27.687797     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.643899 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:39 old-k8s-version-633693 kubelet[742]: E0328 22:08:39.688581     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.644248 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:41 old-k8s-version-633693 kubelet[742]: E0328 22:08:41.687550     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.645266 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:51 old-k8s-version-633693 kubelet[742]: E0328 22:08:51.689572     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.645867 1338826 logs.go:138] Found kubelet problem: Mar 28 22:08:55 old-k8s-version-633693 kubelet[742]: E0328 22:08:55.094326     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.646051 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:02 old-k8s-version-633693 kubelet[742]: E0328 22:09:02.687902     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.646397 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:02 old-k8s-version-633693 kubelet[742]: E0328 22:09:02.808415     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.646732 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:13 old-k8s-version-633693 kubelet[742]: E0328 22:09:13.689276     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.646918 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:14 old-k8s-version-633693 kubelet[742]: E0328 22:09:14.688165     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.647245 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:24 old-k8s-version-633693 kubelet[742]: E0328 22:09:24.687486     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.647451 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:29 old-k8s-version-633693 kubelet[742]: E0328 22:09:29.688477     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.647780 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:35 old-k8s-version-633693 kubelet[742]: E0328 22:09:35.687479     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.648521 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:40 old-k8s-version-633693 kubelet[742]: E0328 22:09:40.688238     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.648899 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:48 old-k8s-version-633693 kubelet[742]: E0328 22:09:48.687433     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.651003 1338826 logs.go:138] Found kubelet problem: Mar 28 22:09:51 old-k8s-version-633693 kubelet[742]: E0328 22:09:51.697009     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0328 22:12:30.651336 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:02 old-k8s-version-633693 kubelet[742]: E0328 22:10:02.687437     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.651523 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:03 old-k8s-version-633693 kubelet[742]: E0328 22:10:03.688247     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.652119 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:16 old-k8s-version-633693 kubelet[742]: E0328 22:10:16.214323     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.652326 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:16 old-k8s-version-633693 kubelet[742]: E0328 22:10:16.688145     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.652674 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:22 old-k8s-version-633693 kubelet[742]: E0328 22:10:22.808176     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.652864 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:29 old-k8s-version-633693 kubelet[742]: E0328 22:10:29.688127     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.653193 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:33 old-k8s-version-633693 kubelet[742]: E0328 22:10:33.687521     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.653377 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:40 old-k8s-version-633693 kubelet[742]: E0328 22:10:40.688295     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.653719 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:46 old-k8s-version-633693 kubelet[742]: E0328 22:10:46.687410     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.653906 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:52 old-k8s-version-633693 kubelet[742]: E0328 22:10:52.688204     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.654232 1338826 logs.go:138] Found kubelet problem: Mar 28 22:10:57 old-k8s-version-633693 kubelet[742]: E0328 22:10:57.687778     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.654416 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:07 old-k8s-version-633693 kubelet[742]: E0328 22:11:07.688569     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.654742 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:10 old-k8s-version-633693 kubelet[742]: E0328 22:11:10.687505     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.654925 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:21 old-k8s-version-633693 kubelet[742]: E0328 22:11:21.688528     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.655250 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:25 old-k8s-version-633693 kubelet[742]: E0328 22:11:25.687833     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.655433 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:36 old-k8s-version-633693 kubelet[742]: E0328 22:11:36.687961     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.656006 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:39 old-k8s-version-633693 kubelet[742]: E0328 22:11:39.687565     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.656212 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:51 old-k8s-version-633693 kubelet[742]: E0328 22:11:51.689147     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.656539 1338826 logs.go:138] Found kubelet problem: Mar 28 22:11:54 old-k8s-version-633693 kubelet[742]: E0328 22:11:54.687465     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.656739 1338826 logs.go:138] Found kubelet problem: Mar 28 22:12:02 old-k8s-version-633693 kubelet[742]: E0328 22:12:02.688202     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.657073 1338826 logs.go:138] Found kubelet problem: Mar 28 22:12:06 old-k8s-version-633693 kubelet[742]: E0328 22:12:06.687449     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.657397 1338826 logs.go:138] Found kubelet problem: Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: E0328 22:12:17.687727     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.657583 1338826 logs.go:138] Found kubelet problem: Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: E0328 22:12:17.689382     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
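	The back-off durations in the entries above step 20s → 40s → 1m20s → 2m40s, consistent with kubelet's exponential CrashLoopBackOff, which doubles after each failed restart up to a 5m cap; assuming the default 10s initial delay, the expected sequence is min(10s·2^n, 5m): 10s, 20s, 40s, 1m20s, 2m40s, 5m, 5m, ...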
	I0328 22:12:30.657593 1338826 logs.go:123] Gathering logs for kube-apiserver [aafa2b2860b7d02f5f63f8f79e578efc4b8a612e8845f3c01b98f89c881a05f5] ...
	I0328 22:12:30.657607 1338826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aafa2b2860b7d02f5f63f8f79e578efc4b8a612e8845f3c01b98f89c881a05f5"
	I0328 22:12:30.726909 1338826 out.go:304] Setting ErrFile to fd 2...
	I0328 22:12:30.726942 1338826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 22:12:30.727001 1338826 out.go:239] X Problems detected in kubelet:
	W0328 22:12:30.727013 1338826 out.go:239]   Mar 28 22:11:54 old-k8s-version-633693 kubelet[742]: E0328 22:11:54.687465     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.727021 1338826 out.go:239]   Mar 28 22:12:02 old-k8s-version-633693 kubelet[742]: E0328 22:12:02.688202     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0328 22:12:30.727028 1338826 out.go:239]   Mar 28 22:12:06 old-k8s-version-633693 kubelet[742]: E0328 22:12:06.687449     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.727039 1338826 out.go:239]   Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: E0328 22:12:17.687727     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	W0328 22:12:30.727048 1338826 out.go:239]   Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: E0328 22:12:17.689382     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0328 22:12:30.727062 1338826 out.go:304] Setting ErrFile to fd 2...
	I0328 22:12:30.727068 1338826 out.go:338] TERM=,COLORTERM=, which probably does not support color
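	The "Problems detected in kubelet" summary above comes from scanning the node's kubelet journal. A minimal sketch for reproducing the scan by hand (assuming the profile name from this run and the same 400-line window the log gatherer uses; the grep patterns are illustrative):
	
	  out/minikube-linux-arm64 -p old-k8s-version-633693 ssh "sudo journalctl -u kubelet -n 400 | grep -E 'ImagePullBackOff|CrashLoopBackOff'"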
	I0328 22:12:34.761967 1343785 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0328 22:12:34.771071 1343785 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0328 22:12:34.772106 1343785 api_server.go:141] control plane version: v1.30.0-beta.0
	I0328 22:12:34.772131 1343785 api_server.go:131] duration metric: took 11.236856775s to wait for apiserver health ...
	I0328 22:12:34.772140 1343785 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 22:12:34.772169 1343785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0328 22:12:34.772232 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0328 22:12:34.808896 1343785 cri.go:89] found id: "2802d01750454277c58463803cc59154ce9ed35544da7d3b40387123a0807b37"
	I0328 22:12:34.808918 1343785 cri.go:89] found id: ""
	I0328 22:12:34.808926 1343785 logs.go:276] 1 containers: [2802d01750454277c58463803cc59154ce9ed35544da7d3b40387123a0807b37]
	I0328 22:12:34.808982 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:34.814246 1343785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0328 22:12:34.814323 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0328 22:12:34.862726 1343785 cri.go:89] found id: "77a89f4b79b5e13c02662c8ba6f6f3a922d205bf39b5c63ab0720b597e5ad2f4"
	I0328 22:12:34.862749 1343785 cri.go:89] found id: ""
	I0328 22:12:34.862756 1343785 logs.go:276] 1 containers: [77a89f4b79b5e13c02662c8ba6f6f3a922d205bf39b5c63ab0720b597e5ad2f4]
	I0328 22:12:34.862813 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:34.866259 1343785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0328 22:12:34.866334 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0328 22:12:34.918324 1343785 cri.go:89] found id: "ad1c78378fd080073e6d7b4adcb4ebcd3512cdf786702dc1c9708c29fd3c43b7"
	I0328 22:12:34.918348 1343785 cri.go:89] found id: ""
	I0328 22:12:34.918357 1343785 logs.go:276] 1 containers: [ad1c78378fd080073e6d7b4adcb4ebcd3512cdf786702dc1c9708c29fd3c43b7]
	I0328 22:12:34.918414 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:34.922217 1343785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0328 22:12:34.922301 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0328 22:12:34.963463 1343785 cri.go:89] found id: "c0438a8c8a7c6ee24b7206071fa6b63dd553fbf14f4fd558528c57917097cde5"
	I0328 22:12:34.963485 1343785 cri.go:89] found id: ""
	I0328 22:12:34.963492 1343785 logs.go:276] 1 containers: [c0438a8c8a7c6ee24b7206071fa6b63dd553fbf14f4fd558528c57917097cde5]
	I0328 22:12:34.963554 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:34.967897 1343785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0328 22:12:34.967976 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0328 22:12:35.008964 1343785 cri.go:89] found id: "02c2f0a0149bac38e3cc7ba146663278ece69b0c2c72593414ea1aa8114deb43"
	I0328 22:12:35.008989 1343785 cri.go:89] found id: ""
	I0328 22:12:35.008998 1343785 logs.go:276] 1 containers: [02c2f0a0149bac38e3cc7ba146663278ece69b0c2c72593414ea1aa8114deb43]
	I0328 22:12:35.009072 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:35.014679 1343785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0328 22:12:35.014762 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0328 22:12:35.055252 1343785 cri.go:89] found id: "b4dfc8a81959e1e54f5ccd7dbce5e5b79e2e50ea3eb9c6ef60ed4cc3619a0208"
	I0328 22:12:35.055276 1343785 cri.go:89] found id: ""
	I0328 22:12:35.055285 1343785 logs.go:276] 1 containers: [b4dfc8a81959e1e54f5ccd7dbce5e5b79e2e50ea3eb9c6ef60ed4cc3619a0208]
	I0328 22:12:35.055344 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:35.058889 1343785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0328 22:12:35.058971 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0328 22:12:35.101541 1343785 cri.go:89] found id: "2629b02a9c3dc6762ccb0b6979179bee48869c9043cf2d708a2c66592fb3c3b5"
	I0328 22:12:35.101566 1343785 cri.go:89] found id: ""
	I0328 22:12:35.101574 1343785 logs.go:276] 1 containers: [2629b02a9c3dc6762ccb0b6979179bee48869c9043cf2d708a2c66592fb3c3b5]
	I0328 22:12:35.101632 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:35.105796 1343785 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0328 22:12:35.105877 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0328 22:12:35.150310 1343785 cri.go:89] found id: "4638f483f335cf1696691ad4ba3f3dc515c5e7c24fb50964ab418406966667e2"
	I0328 22:12:35.150335 1343785 cri.go:89] found id: "3bdeb47065970683bd1984cb42566138e7d30672fa0b08df7df8377ceb1c962a"
	I0328 22:12:35.150340 1343785 cri.go:89] found id: ""
	I0328 22:12:35.150348 1343785 logs.go:276] 2 containers: [4638f483f335cf1696691ad4ba3f3dc515c5e7c24fb50964ab418406966667e2 3bdeb47065970683bd1984cb42566138e7d30672fa0b08df7df8377ceb1c962a]
	I0328 22:12:35.150436 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:35.154508 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:35.158266 1343785 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0328 22:12:35.158340 1343785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0328 22:12:35.205344 1343785 cri.go:89] found id: "1964cd4dcb928edc5fb3ff4cd4a8a1b6afca96dbe666d2a6c6308a962a04fe15"
	I0328 22:12:35.205369 1343785 cri.go:89] found id: ""
	I0328 22:12:35.205376 1343785 logs.go:276] 1 containers: [1964cd4dcb928edc5fb3ff4cd4a8a1b6afca96dbe666d2a6c6308a962a04fe15]
	I0328 22:12:35.205458 1343785 ssh_runner.go:195] Run: which crictl
	I0328 22:12:35.209503 1343785 logs.go:123] Gathering logs for dmesg ...
	I0328 22:12:35.209576 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 22:12:35.228942 1343785 logs.go:123] Gathering logs for kube-scheduler [c0438a8c8a7c6ee24b7206071fa6b63dd553fbf14f4fd558528c57917097cde5] ...
	I0328 22:12:35.228974 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0438a8c8a7c6ee24b7206071fa6b63dd553fbf14f4fd558528c57917097cde5"
	I0328 22:12:35.272370 1343785 logs.go:123] Gathering logs for storage-provisioner [3bdeb47065970683bd1984cb42566138e7d30672fa0b08df7df8377ceb1c962a] ...
	I0328 22:12:35.272403 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bdeb47065970683bd1984cb42566138e7d30672fa0b08df7df8377ceb1c962a"
	I0328 22:12:35.311726 1343785 logs.go:123] Gathering logs for describe nodes ...
	I0328 22:12:35.311799 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 22:12:35.440684 1343785 logs.go:123] Gathering logs for etcd [77a89f4b79b5e13c02662c8ba6f6f3a922d205bf39b5c63ab0720b597e5ad2f4] ...
	I0328 22:12:35.440717 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77a89f4b79b5e13c02662c8ba6f6f3a922d205bf39b5c63ab0720b597e5ad2f4"
	I0328 22:12:35.489277 1343785 logs.go:123] Gathering logs for kubernetes-dashboard [1964cd4dcb928edc5fb3ff4cd4a8a1b6afca96dbe666d2a6c6308a962a04fe15] ...
	I0328 22:12:35.489307 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1964cd4dcb928edc5fb3ff4cd4a8a1b6afca96dbe666d2a6c6308a962a04fe15"
	I0328 22:12:35.535305 1343785 logs.go:123] Gathering logs for CRI-O ...
	I0328 22:12:35.535381 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0328 22:12:35.613556 1343785 logs.go:123] Gathering logs for container status ...
	I0328 22:12:35.613590 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 22:12:35.662071 1343785 logs.go:123] Gathering logs for kubelet ...
	I0328 22:12:35.662100 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0328 22:12:35.690147 1343785 logs.go:138] Found kubelet problem: Mar 28 22:08:17 no-preload-363849 kubelet[750]: W0328 22:08:17.200950     750 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-363849" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-363849' and this object
	W0328 22:12:35.690377 1343785 logs.go:138] Found kubelet problem: Mar 28 22:08:17 no-preload-363849 kubelet[750]: E0328 22:08:17.201000     750 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-363849" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-363849' and this object
	I0328 22:12:35.749307 1343785 logs.go:123] Gathering logs for kube-proxy [02c2f0a0149bac38e3cc7ba146663278ece69b0c2c72593414ea1aa8114deb43] ...
	I0328 22:12:35.749344 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02c2f0a0149bac38e3cc7ba146663278ece69b0c2c72593414ea1aa8114deb43"
	I0328 22:12:35.788679 1343785 logs.go:123] Gathering logs for kube-controller-manager [b4dfc8a81959e1e54f5ccd7dbce5e5b79e2e50ea3eb9c6ef60ed4cc3619a0208] ...
	I0328 22:12:35.788710 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4dfc8a81959e1e54f5ccd7dbce5e5b79e2e50ea3eb9c6ef60ed4cc3619a0208"
	I0328 22:12:35.850080 1343785 logs.go:123] Gathering logs for kube-apiserver [2802d01750454277c58463803cc59154ce9ed35544da7d3b40387123a0807b37] ...
	I0328 22:12:35.850120 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2802d01750454277c58463803cc59154ce9ed35544da7d3b40387123a0807b37"
	I0328 22:12:35.922295 1343785 logs.go:123] Gathering logs for coredns [ad1c78378fd080073e6d7b4adcb4ebcd3512cdf786702dc1c9708c29fd3c43b7] ...
	I0328 22:12:35.922331 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad1c78378fd080073e6d7b4adcb4ebcd3512cdf786702dc1c9708c29fd3c43b7"
	I0328 22:12:35.964277 1343785 logs.go:123] Gathering logs for kindnet [2629b02a9c3dc6762ccb0b6979179bee48869c9043cf2d708a2c66592fb3c3b5] ...
	I0328 22:12:35.964304 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2629b02a9c3dc6762ccb0b6979179bee48869c9043cf2d708a2c66592fb3c3b5"
	I0328 22:12:36.010621 1343785 logs.go:123] Gathering logs for storage-provisioner [4638f483f335cf1696691ad4ba3f3dc515c5e7c24fb50964ab418406966667e2] ...
	I0328 22:12:36.010658 1343785 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4638f483f335cf1696691ad4ba3f3dc515c5e7c24fb50964ab418406966667e2"
	I0328 22:12:36.058876 1343785 out.go:304] Setting ErrFile to fd 2...
	I0328 22:12:36.058952 1343785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 22:12:36.059039 1343785 out.go:239] X Problems detected in kubelet:
	W0328 22:12:36.059084 1343785 out.go:239]   Mar 28 22:08:17 no-preload-363849 kubelet[750]: W0328 22:08:17.200950     750 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-363849" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-363849' and this object
	W0328 22:12:36.059135 1343785 out.go:239]   Mar 28 22:08:17 no-preload-363849 kubelet[750]: E0328 22:08:17.201000     750 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-363849" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-363849' and this object
	I0328 22:12:36.059189 1343785 out.go:304] Setting ErrFile to fd 2...
	I0328 22:12:36.059209 1343785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 22:12:40.727728 1338826 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0328 22:12:40.736874 1338826 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0328 22:12:40.739297 1338826 out.go:177] 
	W0328 22:12:40.741210 1338826 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0328 22:12:40.741248 1338826 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0328 22:12:40.741269 1338826 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0328 22:12:40.741277 1338826 out.go:239] * 
	W0328 22:12:40.742279 1338826 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0328 22:12:40.744622 1338826 out.go:177] 
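	The healthz probe immediately above returned 200/ok, so the K8S_UNHEALTHY_CONTROL_PLANE exit reflects the version wait (controlPlane never updated to v1.20.0) rather than apiserver liveness. A minimal sketch for checking both conditions by hand, assuming the kubectl context matches the profile name:
	
	  out/minikube-linux-arm64 -p old-k8s-version-633693 ssh "curl -sk https://192.168.76.2:8443/healthz"
	  kubectl --context old-k8s-version-633693 version --short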
	
	
	==> CRI-O <==
	Mar 28 22:10:16 old-k8s-version-633693 crio[624]: time="2024-03-28 22:10:16.235825584Z" level=info msg="Removed container 3831dcc3bec2d0a81b186e95090c3fe03274886b562c0b66183b0c047d9b71e4: kubernetes-dashboard/dashboard-metrics-scraper-8d5bb5db8-z2tdh/dashboard-metrics-scraper" id=2b84a530-df89-4dc9-bd11-a318680e3831 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Mar 28 22:10:16 old-k8s-version-633693 crio[624]: time="2024-03-28 22:10:16.687666743Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=b5685d81-810a-4e4f-91e6-a2cf4ea5dc0f name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:10:16 old-k8s-version-633693 crio[624]: time="2024-03-28 22:10:16.687900825Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=b5685d81-810a-4e4f-91e6-a2cf4ea5dc0f name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:10:29 old-k8s-version-633693 crio[624]: time="2024-03-28 22:10:29.687500903Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=2113241f-1cd0-4543-81ee-d50ea80daf45 name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:10:29 old-k8s-version-633693 crio[624]: time="2024-03-28 22:10:29.687763695Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=2113241f-1cd0-4543-81ee-d50ea80daf45 name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:10:40 old-k8s-version-633693 crio[624]: time="2024-03-28 22:10:40.687786651Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=1dd46c99-c541-4616-8c72-02bb8266cdb7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:10:40 old-k8s-version-633693 crio[624]: time="2024-03-28 22:10:40.688020701Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=1dd46c99-c541-4616-8c72-02bb8266cdb7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:10:52 old-k8s-version-633693 crio[624]: time="2024-03-28 22:10:52.687517149Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=014d584d-f967-44ed-bae3-361ee4515587 name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:10:52 old-k8s-version-633693 crio[624]: time="2024-03-28 22:10:52.687752996Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=014d584d-f967-44ed-bae3-361ee4515587 name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:11:07 old-k8s-version-633693 crio[624]: time="2024-03-28 22:11:07.687733793Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=1c2ba994-cfc9-4389-bcf0-4b81929dd556 name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:11:07 old-k8s-version-633693 crio[624]: time="2024-03-28 22:11:07.687968745Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=1c2ba994-cfc9-4389-bcf0-4b81929dd556 name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:11:21 old-k8s-version-633693 crio[624]: time="2024-03-28 22:11:21.687483586Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=83a8b7f1-a71a-412b-9813-cc2c8d0290cf name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:11:21 old-k8s-version-633693 crio[624]: time="2024-03-28 22:11:21.687708569Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=83a8b7f1-a71a-412b-9813-cc2c8d0290cf name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:11:36 old-k8s-version-633693 crio[624]: time="2024-03-28 22:11:36.687486735Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=4aca201e-819c-4385-8324-4d7a6d2d6d3a name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:11:36 old-k8s-version-633693 crio[624]: time="2024-03-28 22:11:36.687718651Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=4aca201e-819c-4385-8324-4d7a6d2d6d3a name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:11:37 old-k8s-version-633693 crio[624]: time="2024-03-28 22:11:37.735687797Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=df9ebd51-7d04-4ace-ae9d-ae486e6df83f name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:11:37 old-k8s-version-633693 crio[624]: time="2024-03-28 22:11:37.735933178Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c,RepoTags:[k8s.gcr.io/pause:3.2 registry.k8s.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f registry.k8s.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 registry.k8s.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:489397,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=df9ebd51-7d04-4ace-ae9d-ae486e6df83f name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:11:51 old-k8s-version-633693 crio[624]: time="2024-03-28 22:11:51.688002610Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=d9bc7e68-96cb-47bb-94df-b5601762ab01 name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:11:51 old-k8s-version-633693 crio[624]: time="2024-03-28 22:11:51.688912503Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=d9bc7e68-96cb-47bb-94df-b5601762ab01 name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:12:02 old-k8s-version-633693 crio[624]: time="2024-03-28 22:12:02.687540585Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=fa883e51-fbf8-4209-9d57-35554b411001 name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:12:02 old-k8s-version-633693 crio[624]: time="2024-03-28 22:12:02.687768218Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=fa883e51-fbf8-4209-9d57-35554b411001 name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:12:17 old-k8s-version-633693 crio[624]: time="2024-03-28 22:12:17.688645888Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=4857a7aa-a3ed-490b-bac0-0b15c0ffec9d name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:12:17 old-k8s-version-633693 crio[624]: time="2024-03-28 22:12:17.689110968Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=4857a7aa-a3ed-490b-bac0-0b15c0ffec9d name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:12:30 old-k8s-version-633693 crio[624]: time="2024-03-28 22:12:30.688843471Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=82ec3e1c-b590-4205-8039-0e2c5386e7d4 name=/runtime.v1alpha2.ImageService/ImageStatus
	Mar 28 22:12:30 old-k8s-version-633693 crio[624]: time="2024-03-28 22:12:30.689151588Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=82ec3e1c-b590-4205-8039-0e2c5386e7d4 name=/runtime.v1alpha2.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	cc607e36d8914       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           2 minutes ago       Exited              dashboard-metrics-scraper   5                   3457d31664405       dashboard-metrics-scraper-8d5bb5db8-z2tdh
	dbebe9c9214b8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           5 minutes ago       Running             storage-provisioner         1                   c93c82b167969       storage-provisioner
	dcb78e1b74650       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   5 minutes ago       Running             kubernetes-dashboard        0                   117fb2e21ac1b       kubernetes-dashboard-cd95d586-ckjdn
	9a5cc20f063ad       db91994f4ee8f894a1e8a6c1a76f615da8fc3c019300a3686291ce6fcbc57895                                           5 minutes ago       Running             coredns                     0                   b3bf2ede41d2a       coredns-74ff55c5b-rq6t8
	99963dd8c6223       25a5233254979d0678a2db1d15b76b73dc380d81bc5eed93916ba5638b3cd894                                           5 minutes ago       Running             kube-proxy                  0                   1ed9b3b46e264       kube-proxy-9vs8r
	de9034878b9e7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           5 minutes ago       Exited              storage-provisioner         0                   c93c82b167969       storage-provisioner
	0ca17af26be7e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           5 minutes ago       Running             busybox                     0                   16f14e1362953       busybox
	f410c25189f5f       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d                                           5 minutes ago       Running             kindnet-cni                 0                   bc5a05d743cb5       kindnet-md4h4
	aafa2b2860b7d       2c08bbbc02d3aa5dfbf4e79f15c0a61424049288917aa10364464ca1f7de7157                                           6 minutes ago       Running             kube-apiserver              0                   622f934704238       kube-apiserver-old-k8s-version-633693
	accf5dfc23ea0       1df8a2b116bd16f7070fd383a6769c8d644b365575e8ffa3e492b84e4f05fc74                                           6 minutes ago       Running             kube-controller-manager     0                   6592fa42f41e8       kube-controller-manager-old-k8s-version-633693
	54454ca60825d       05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28                                           6 minutes ago       Running             etcd                        0                   108fd5b0c4944       etcd-old-k8s-version-633693
	0c1c9dee5f4ce       e7605f88f17d6a4c3f083ef9c6f5f19b39f87e4d4406a05a8612b54a6ea57051                                           6 minutes ago       Running             kube-scheduler              0                   80f6dddfa44ca       kube-scheduler-old-k8s-version-633693
	
	
	==> coredns [9a5cc20f063ad7f3789223916297cb6dbac2af4ce58e82d5bde29f4655036962] <==
	I0328 22:07:21.732859       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-28 22:06:51.730641281 +0000 UTC m=+0.032486899) (total time: 30.002091471s):
	Trace[2019727887]: [30.002091471s] [30.002091471s] END
	E0328 22:07:21.736287       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0328 22:07:21.733123       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-28 22:06:51.731367528 +0000 UTC m=+0.033213155) (total time: 30.001735543s):
	Trace[1427131847]: [30.001735543s] [30.001735543s] END
	I0328 22:07:21.736222       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-28 22:06:51.731177293 +0000 UTC m=+0.033022912) (total time: 30.005007209s):
	Trace[939984059]: [30.005007209s] [30.005007209s] END
	E0328 22:07:21.736637       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	E0328 22:07:21.736625       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:52564 - 62559 "HINFO IN 6871333457023326095.2102777586334840463. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027615718s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:52738 - 53427 "HINFO IN 7547922539308925481.753229652630310616. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.062798985s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
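	The reflector failures above all target https://10.96.0.1:443, the in-cluster kubernetes Service fronting the API server; the second startup block shows CoreDNS recovering once that address is reachable. A minimal sketch for confirming the Service address, assuming the kubectl context matches the profile name:
	
	  kubectl --context old-k8s-version-633693 -n default get svc kubernetes -o wide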
	
	
	==> describe nodes <==
	Name:               old-k8s-version-633693
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-633693
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2883ffbf70a3cdb38617e0fd1a9bb421b3d79967
	                    minikube.k8s.io/name=old-k8s-version-633693
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T22_04_23_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 22:04:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-633693
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 22:12:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 22:12:40 +0000   Thu, 28 Mar 2024 22:04:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 22:12:40 +0000   Thu, 28 Mar 2024 22:04:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 22:12:40 +0000   Thu, 28 Mar 2024 22:04:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 22:12:40 +0000   Thu, 28 Mar 2024 22:05:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-633693
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e11c1992c8e4398b6f23ee558d7ede7
	  System UUID:                50ff53cc-2070-40db-a887-822f52e8dbfa
	  Boot ID:                    18dd0f92-d332-41a7-aacd-d07143d316b2
	  Kernel Version:             5.15.0-1056-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	  kube-system                 coredns-74ff55c5b-rq6t8                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m3s
	  kube-system                 etcd-old-k8s-version-633693                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m11s
	  kube-system                 kindnet-md4h4                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m3s
	  kube-system                 kube-apiserver-old-k8s-version-633693             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-controller-manager-old-k8s-version-633693    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-proxy-9vs8r                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-scheduler-old-k8s-version-633693             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 metrics-server-9975d5f86-h5ts8                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m34s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-z2tdh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-ckjdn               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 8m11s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m11s                kubelet     Node old-k8s-version-633693 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m11s                kubelet     Node old-k8s-version-633693 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m11s                kubelet     Node old-k8s-version-633693 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m1s                 kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                7m21s                kubelet     Node old-k8s-version-633693 status is now: NodeReady
	  Normal  Starting                 6m5s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m5s (x8 over 6m5s)  kubelet     Node old-k8s-version-633693 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m5s (x8 over 6m5s)  kubelet     Node old-k8s-version-633693 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m5s (x8 over 6m5s)  kubelet     Node old-k8s-version-633693 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m51s                kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.001077] FS-Cache: N-key=[8] 'eb405c0100000000'
	[  +0.002412] FS-Cache: Duplicate cookie detected
	[  +0.000778] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.001057] FS-Cache: O-cookie d=00000000d4269778{9p.inode} n=000000005495c988
	[  +0.001112] FS-Cache: O-key=[8] 'eb405c0100000000'
	[  +0.000707] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000932] FS-Cache: N-cookie d=00000000d4269778{9p.inode} n=00000000b20bf3f1
	[  +0.001084] FS-Cache: N-key=[8] 'eb405c0100000000'
	[  +2.196103] FS-Cache: Duplicate cookie detected
	[  +0.000725] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001008] FS-Cache: O-cookie d=00000000d4269778{9p.inode} n=00000000484a23fd
	[  +0.001157] FS-Cache: O-key=[8] 'ea405c0100000000'
	[  +0.000721] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001012] FS-Cache: N-cookie d=00000000d4269778{9p.inode} n=00000000d5b6d522
	[  +0.001041] FS-Cache: N-key=[8] 'ea405c0100000000'
	[  +0.350982] FS-Cache: Duplicate cookie detected
	[  +0.000711] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001046] FS-Cache: O-cookie d=00000000d4269778{9p.inode} n=00000000baccf953
	[  +0.001062] FS-Cache: O-key=[8] 'f0405c0100000000'
	[  +0.000821] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001037] FS-Cache: N-cookie d=00000000d4269778{9p.inode} n=00000000d865c146
	[  +0.001044] FS-Cache: N-key=[8] 'f0405c0100000000'
	[Mar28 21:58] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
	[  +0.028912] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
	[  +0.101611] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
	
	
	==> etcd [54454ca60825d1d8afde9d3954763066222f962cd282dd4ae5776677f1d02263] <==
	2024-03-28 22:08:36.503948 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:08:46.504037 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:08:56.504145 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:09:06.503982 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:09:16.507902 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:09:26.504056 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:09:36.504021 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:09:46.504117 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:09:56.503971 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:10:06.503971 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:10:16.503992 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:10:26.504043 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:10:36.504039 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:10:46.503949 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:10:56.503945 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:11:06.503934 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:11:16.504020 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:11:26.503949 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:11:36.504150 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:11:46.504010 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:11:56.503928 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:12:06.503936 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:12:16.504320 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:12:26.504018 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-28 22:12:36.503978 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 22:12:42 up  5:55,  0 users,  load average: 0.28, 1.43, 2.06
	Linux old-k8s-version-633693 5.15.0-1056-aws #61~20.04.1-Ubuntu SMP Wed Mar 13 17:45:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f410c25189f5f56842f0d6f5e959b565b53ee6a8d5258db914882d1e38e9ab2b] <==
	I0328 22:10:40.527418       1 main.go:227] handling current node
	I0328 22:10:50.534515       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0328 22:10:50.534549       1 main.go:227] handling current node
	I0328 22:11:00.549560       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0328 22:11:00.549591       1 main.go:227] handling current node
	I0328 22:11:10.563873       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0328 22:11:10.563900       1 main.go:227] handling current node
	I0328 22:11:20.573311       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0328 22:11:20.573349       1 main.go:227] handling current node
	I0328 22:11:30.587875       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0328 22:11:30.587905       1 main.go:227] handling current node
	I0328 22:11:40.602049       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0328 22:11:40.602077       1 main.go:227] handling current node
	I0328 22:11:50.607661       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0328 22:11:50.607690       1 main.go:227] handling current node
	I0328 22:12:00.620347       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0328 22:12:00.620375       1 main.go:227] handling current node
	I0328 22:12:10.625288       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0328 22:12:10.625315       1 main.go:227] handling current node
	I0328 22:12:20.630840       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0328 22:12:20.630869       1 main.go:227] handling current node
	I0328 22:12:30.645087       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0328 22:12:30.645115       1 main.go:227] handling current node
	I0328 22:12:40.652051       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0328 22:12:40.652295       1 main.go:227] handling current node
	
	
	==> kube-apiserver [aafa2b2860b7d02f5f63f8f79e578efc4b8a612e8845f3c01b98f89c881a05f5] <==
	I0328 22:09:13.676511       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 22:09:13.676530       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0328 22:09:51.086043       1 client.go:360] parsed scheme: "passthrough"
	I0328 22:09:51.086090       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 22:09:51.086100       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0328 22:09:52.391431       1 handler_proxy.go:102] no RequestInfo found in the context
	E0328 22:09:52.391500       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 22:09:52.391507       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0328 22:10:28.319216       1 client.go:360] parsed scheme: "passthrough"
	I0328 22:10:28.319266       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 22:10:28.319276       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0328 22:11:07.557977       1 client.go:360] parsed scheme: "passthrough"
	I0328 22:11:07.558042       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 22:11:07.558051       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0328 22:11:38.387259       1 client.go:360] parsed scheme: "passthrough"
	I0328 22:11:38.387307       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 22:11:38.387316       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0328 22:11:48.921809       1 handler_proxy.go:102] no RequestInfo found in the context
	E0328 22:11:48.921983       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0328 22:11:48.921997       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0328 22:12:16.314600       1 client.go:360] parsed scheme: "passthrough"
	I0328 22:12:16.314639       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0328 22:12:16.314648       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [accf5dfc23ea01d34e6a9b748711dcadd8f2af898f72bbc4348b539768308591] <==
	W0328 22:08:14.932376       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 22:08:39.185632       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 22:08:46.582968       1 request.go:655] Throttling request took 1.04849168s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1?timeout=32s
	W0328 22:08:47.434141       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 22:09:09.688625       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 22:09:19.084528       1 request.go:655] Throttling request took 1.048062666s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
	W0328 22:09:19.942080       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 22:09:40.190880       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 22:09:51.592448       1 request.go:655] Throttling request took 1.04847541s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0328 22:09:52.443843       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 22:10:10.692737       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 22:10:24.094279       1 request.go:655] Throttling request took 1.048451825s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0328 22:10:24.945527       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 22:10:41.194506       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 22:10:56.595971       1 request.go:655] Throttling request took 1.048380995s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1beta1?timeout=32s
	W0328 22:10:57.447420       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 22:11:11.696398       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 22:11:29.097825       1 request.go:655] Throttling request took 1.048417602s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0328 22:11:29.949243       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 22:11:42.198668       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 22:12:01.599601       1 request.go:655] Throttling request took 1.048262636s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0328 22:12:02.450926       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0328 22:12:12.700723       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0328 22:12:34.101321       1 request.go:655] Throttling request took 1.048180728s, request: GET:https://192.168.76.2:8443/apis/events.k8s.io/v1?timeout=32s
	W0328 22:12:34.952876       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [99963dd8c6223acf3495ce9e9c35c7f4c45a97d49ba1435654dd656d82537a95] <==
	I0328 22:04:41.239441       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0328 22:04:41.239543       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0328 22:04:41.251643       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0328 22:04:41.251732       1 server_others.go:185] Using iptables Proxier.
	I0328 22:04:41.252027       1 server.go:650] Version: v1.20.0
	I0328 22:04:41.252830       1 config.go:315] Starting service config controller
	I0328 22:04:41.252846       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0328 22:04:41.252871       1 config.go:224] Starting endpoint slice config controller
	I0328 22:04:41.252875       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0328 22:04:41.360645       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0328 22:04:41.360713       1 shared_informer.go:247] Caches are synced for service config 
	I0328 22:06:51.536416       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0328 22:06:51.536669       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0328 22:06:51.551158       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0328 22:06:51.551340       1 server_others.go:185] Using iptables Proxier.
	I0328 22:06:51.552075       1 server.go:650] Version: v1.20.0
	I0328 22:06:51.553898       1 config.go:315] Starting service config controller
	I0328 22:06:51.553963       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0328 22:06:51.553987       1 config.go:224] Starting endpoint slice config controller
	I0328 22:06:51.553991       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0328 22:06:51.660173       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0328 22:06:51.660401       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [0c1c9dee5f4ce24a9e91181634895b593f2aec0d21a11adbe16ddd1adce82d0b] <==
	E0328 22:04:20.811852       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0328 22:04:20.959695       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 22:04:21.380035       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0328 22:06:41.191130       1 serving.go:331] Generated self-signed cert in-memory
	W0328 22:06:47.663678       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0328 22:06:47.663705       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 22:06:47.663773       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0328 22:06:47.663780       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 22:06:47.825934       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0328 22:06:47.839769       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0328 22:06:47.840611       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 22:06:47.848851       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0328 22:06:47.848623       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0328 22:06:47.879828       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0328 22:06:47.880581       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 22:06:47.881564       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0328 22:06:47.881680       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0328 22:06:47.881710       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 22:06:47.881889       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0328 22:06:47.882029       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0328 22:06:47.888143       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0328 22:06:47.892591       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0328 22:06:47.892690       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0328 22:06:47.904511       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 22:06:49.151631       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Mar 28 22:10:57 old-k8s-version-633693 kubelet[742]: E0328 22:10:57.687778     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	Mar 28 22:11:07 old-k8s-version-633693 kubelet[742]: E0328 22:11:07.688569     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 22:11:10 old-k8s-version-633693 kubelet[742]: I0328 22:11:10.687074     742 scope.go:95] [topologymanager] RemoveContainer - Container ID: cc607e36d891433dcc76acec8be25b4ec9f6ac4b539a4302bb9b71c200e74ecc
	Mar 28 22:11:10 old-k8s-version-633693 kubelet[742]: E0328 22:11:10.687505     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	Mar 28 22:11:21 old-k8s-version-633693 kubelet[742]: E0328 22:11:21.688528     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 22:11:25 old-k8s-version-633693 kubelet[742]: I0328 22:11:25.687115     742 scope.go:95] [topologymanager] RemoveContainer - Container ID: cc607e36d891433dcc76acec8be25b4ec9f6ac4b539a4302bb9b71c200e74ecc
	Mar 28 22:11:25 old-k8s-version-633693 kubelet[742]: E0328 22:11:25.687833     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	Mar 28 22:11:36 old-k8s-version-633693 kubelet[742]: E0328 22:11:36.687961     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 22:11:37 old-k8s-version-633693 kubelet[742]: E0328 22:11:37.743985     742 container_manager_linux.go:533] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /docker/fe636ffa3a97a0d6d85a45fcf40320435f4f1adeb9e541a502313138a659e23d, memory: /docker/fe636ffa3a97a0d6d85a45fcf40320435f4f1adeb9e541a502313138a659e23d/system.slice/kubelet.service
	Mar 28 22:11:39 old-k8s-version-633693 kubelet[742]: I0328 22:11:39.687228     742 scope.go:95] [topologymanager] RemoveContainer - Container ID: cc607e36d891433dcc76acec8be25b4ec9f6ac4b539a4302bb9b71c200e74ecc
	Mar 28 22:11:39 old-k8s-version-633693 kubelet[742]: E0328 22:11:39.687565     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	Mar 28 22:11:51 old-k8s-version-633693 kubelet[742]: E0328 22:11:51.689147     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 22:11:54 old-k8s-version-633693 kubelet[742]: I0328 22:11:54.687105     742 scope.go:95] [topologymanager] RemoveContainer - Container ID: cc607e36d891433dcc76acec8be25b4ec9f6ac4b539a4302bb9b71c200e74ecc
	Mar 28 22:11:54 old-k8s-version-633693 kubelet[742]: E0328 22:11:54.687465     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	Mar 28 22:12:02 old-k8s-version-633693 kubelet[742]: E0328 22:12:02.688202     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 22:12:06 old-k8s-version-633693 kubelet[742]: I0328 22:12:06.687102     742 scope.go:95] [topologymanager] RemoveContainer - Container ID: cc607e36d891433dcc76acec8be25b4ec9f6ac4b539a4302bb9b71c200e74ecc
	Mar 28 22:12:06 old-k8s-version-633693 kubelet[742]: E0328 22:12:06.687449     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: I0328 22:12:17.687398     742 scope.go:95] [topologymanager] RemoveContainer - Container ID: cc607e36d891433dcc76acec8be25b4ec9f6ac4b539a4302bb9b71c200e74ecc
	Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: E0328 22:12:17.687727     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	Mar 28 22:12:17 old-k8s-version-633693 kubelet[742]: E0328 22:12:17.689382     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 22:12:30 old-k8s-version-633693 kubelet[742]: I0328 22:12:30.687217     742 scope.go:95] [topologymanager] RemoveContainer - Container ID: cc607e36d891433dcc76acec8be25b4ec9f6ac4b539a4302bb9b71c200e74ecc
	Mar 28 22:12:30 old-k8s-version-633693 kubelet[742]: E0328 22:12:30.687670     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	Mar 28 22:12:30 old-k8s-version-633693 kubelet[742]: E0328 22:12:30.689802     742 pod_workers.go:191] Error syncing pod fb7522bf-a2a3-485c-b715-79d144a23abd ("metrics-server-9975d5f86-h5ts8_kube-system(fb7522bf-a2a3-485c-b715-79d144a23abd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 28 22:12:41 old-k8s-version-633693 kubelet[742]: I0328 22:12:41.687199     742 scope.go:95] [topologymanager] RemoveContainer - Container ID: cc607e36d891433dcc76acec8be25b4ec9f6ac4b539a4302bb9b71c200e74ecc
	Mar 28 22:12:41 old-k8s-version-633693 kubelet[742]: E0328 22:12:41.687544     742 pod_workers.go:191] Error syncing pod 48d62b74-b3d8-40ca-9a76-635584f8ffd5 ("dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z2tdh_kubernetes-dashboard(48d62b74-b3d8-40ca-9a76-635584f8ffd5)"
	
	
	==> kubernetes-dashboard [dcb78e1b7465099ed99741a848314cba3d027ecfab8e44b022dc85231ab9a26d] <==
	2024/03/28 22:07:21 Using namespace: kubernetes-dashboard
	2024/03/28 22:07:21 Using in-cluster config to connect to apiserver
	2024/03/28 22:07:21 Using secret token for csrf signing
	2024/03/28 22:07:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/03/28 22:07:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/03/28 22:07:21 Successful initial request to the apiserver, version: v1.20.0
	2024/03/28 22:07:21 Generating JWE encryption key
	2024/03/28 22:07:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/03/28 22:07:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/03/28 22:07:21 Initializing JWE encryption key from synchronized object
	2024/03/28 22:07:21 Creating in-cluster Sidecar client
	2024/03/28 22:07:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 22:07:21 Serving insecurely on HTTP port: 9090
	2024/03/28 22:07:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 22:08:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 22:08:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 22:09:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 22:09:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 22:10:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 22:10:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 22:11:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 22:11:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 22:12:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/28 22:07:21 Starting overwatch
	
	
	==> storage-provisioner [dbebe9c9214b8446dd7e1ba58af2dcdda839029f75a4df5d16fce02680f0f723] <==
	I0328 22:07:22.068755       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0328 22:07:22.085681       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0328 22:07:22.085818       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0328 22:07:39.533702       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0328 22:07:39.533877       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-633693_55f8dc27-b24e-4917-9908-450b2a25a860!
	I0328 22:07:39.534263       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1b46d884-d612-44f4-b871-e29fcd11c9e4", APIVersion:"v1", ResourceVersion:"797", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-633693_55f8dc27-b24e-4917-9908-450b2a25a860 became leader
	I0328 22:07:39.634313       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-633693_55f8dc27-b24e-4917-9908-450b2a25a860!
	
	
	==> storage-provisioner [de9034878b9e7c8d2e40e141b1c21f2a76586667af994d8465e9601d126955e0] <==
	I0328 22:05:26.391048       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0328 22:05:26.408457       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0328 22:05:26.408506       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0328 22:05:26.452948       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0328 22:05:26.453255       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-633693_3b8e925e-d2df-4db5-bed6-d59560f69ae6!
	I0328 22:05:26.457828       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1b46d884-d612-44f4-b871-e29fcd11c9e4", APIVersion:"v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-633693_3b8e925e-d2df-4db5-bed6-d59560f69ae6 became leader
	I0328 22:05:26.556969       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-633693_3b8e925e-d2df-4db5-bed6-d59560f69ae6!
	I0328 22:06:51.402512       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0328 22:07:21.408539       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-633693 -n old-k8s-version-633693
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-633693 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-h5ts8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-633693 describe pod metrics-server-9975d5f86-h5ts8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-633693 describe pod metrics-server-9975d5f86-h5ts8: exit status 1 (88.919841ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-h5ts8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-633693 describe pod metrics-server-9975d5f86-h5ts8: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (383.22s)

                                                
                                    

Test pass (301/335)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 11.51
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.23
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.29.3/json-events 7.73
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.09
18 TestDownloadOnly/v1.29.3/DeleteAll 0.21
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.30.0-beta.0/json-events 10.75
22 TestDownloadOnly/v1.30.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.30.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.30.0-beta.0/DeleteAll 0.2
28 TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.53
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
36 TestAddons/Setup 171.52
38 TestAddons/parallel/Registry 16.42
40 TestAddons/parallel/InspektorGadget 11.79
41 TestAddons/parallel/MetricsServer 5.8
44 TestAddons/parallel/CSI 59.85
45 TestAddons/parallel/Headlamp 11.03
46 TestAddons/parallel/CloudSpanner 6.63
47 TestAddons/parallel/LocalPath 9.56
48 TestAddons/parallel/NvidiaDevicePlugin 5.54
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.21
53 TestAddons/StoppedEnableDisable 12.36
54 TestCertOptions 34.11
55 TestCertExpiration 239.45
57 TestForceSystemdFlag 40.84
58 TestForceSystemdEnv 45.78
64 TestErrorSpam/setup 34.01
65 TestErrorSpam/start 0.76
66 TestErrorSpam/status 0.96
67 TestErrorSpam/pause 1.72
68 TestErrorSpam/unpause 1.78
69 TestErrorSpam/stop 1.44
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 77.54
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 29.41
76 TestFunctional/serial/KubeContext 0.07
77 TestFunctional/serial/KubectlGetPods 0.11
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.66
81 TestFunctional/serial/CacheCmd/cache/add_local 1.12
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.95
86 TestFunctional/serial/CacheCmd/cache/delete 0.14
87 TestFunctional/serial/MinikubeKubectlCmd 0.16
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.17
89 TestFunctional/serial/ExtraConfig 40.33
90 TestFunctional/serial/ComponentHealth 0.09
91 TestFunctional/serial/LogsCmd 1.68
92 TestFunctional/serial/LogsFileCmd 1.71
93 TestFunctional/serial/InvalidService 4.34
95 TestFunctional/parallel/ConfigCmd 0.56
96 TestFunctional/parallel/DashboardCmd 11.35
97 TestFunctional/parallel/DryRun 0.44
98 TestFunctional/parallel/InternationalLanguage 0.19
99 TestFunctional/parallel/StatusCmd 1.15
103 TestFunctional/parallel/ServiceCmdConnect 11.69
104 TestFunctional/parallel/AddonsCmd 0.23
105 TestFunctional/parallel/PersistentVolumeClaim 25.82
107 TestFunctional/parallel/SSHCmd 0.71
108 TestFunctional/parallel/CpCmd 2.24
110 TestFunctional/parallel/FileSync 0.35
111 TestFunctional/parallel/CertSync 2.54
115 TestFunctional/parallel/NodeLabels 0.13
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.7
119 TestFunctional/parallel/License 0.33
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.36
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
133 TestFunctional/parallel/ProfileCmd/profile_list 0.43
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
135 TestFunctional/parallel/MountCmd/any-port 7.32
136 TestFunctional/parallel/ServiceCmd/List 0.59
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.58
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
139 TestFunctional/parallel/ServiceCmd/Format 0.41
140 TestFunctional/parallel/ServiceCmd/URL 0.45
141 TestFunctional/parallel/MountCmd/specific-port 2.27
142 TestFunctional/parallel/MountCmd/VerifyCleanup 2.29
143 TestFunctional/parallel/Version/short 0.1
144 TestFunctional/parallel/Version/components 1.21
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.79
150 TestFunctional/parallel/ImageCommands/Setup 1.84
151 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.22
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.28
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.48
157 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.88
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
159 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.28
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.91
161 TestFunctional/delete_addon-resizer_images 0.07
162 TestFunctional/delete_my-image_image 0.01
163 TestFunctional/delete_minikube_cached_images 0.01
167 TestMultiControlPlane/serial/StartCluster 158.77
168 TestMultiControlPlane/serial/DeployApp 7.2
169 TestMultiControlPlane/serial/PingHostFromPods 1.79
170 TestMultiControlPlane/serial/AddWorkerNode 54.56
171 TestMultiControlPlane/serial/NodeLabels 0.1
172 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.82
173 TestMultiControlPlane/serial/CopyFile 19.47
174 TestMultiControlPlane/serial/StopSecondaryNode 12.76
175 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
176 TestMultiControlPlane/serial/RestartSecondaryNode 34.36
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 5.48
178 TestMultiControlPlane/serial/RestartClusterKeepsNodes 212.09
179 TestMultiControlPlane/serial/DeleteSecondaryNode 12.96
180 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
181 TestMultiControlPlane/serial/StopCluster 35.73
182 TestMultiControlPlane/serial/RestartCluster 120.11
183 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.55
184 TestMultiControlPlane/serial/AddSecondaryNode 62.34
185 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.78
189 TestJSONOutput/start/Command 79.78
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.74
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.68
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.77
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.23
214 TestKicCustomNetwork/create_custom_network 44.04
215 TestKicCustomNetwork/use_default_bridge_network 34.75
216 TestKicExistingNetwork 33.14
217 TestKicCustomSubnet 32.96
218 TestKicStaticIP 34.01
219 TestMainNoArgs 0.06
220 TestMinikubeProfile 71.58
223 TestMountStart/serial/StartWithMountFirst 7.09
224 TestMountStart/serial/VerifyMountFirst 0.26
225 TestMountStart/serial/StartWithMountSecond 6.58
226 TestMountStart/serial/VerifyMountSecond 0.26
227 TestMountStart/serial/DeleteFirst 1.59
228 TestMountStart/serial/VerifyMountPostDelete 0.28
229 TestMountStart/serial/Stop 1.21
230 TestMountStart/serial/RestartStopped 7.81
231 TestMountStart/serial/VerifyMountPostStop 0.26
234 TestMultiNode/serial/FreshStart2Nodes 119.89
235 TestMultiNode/serial/DeployApp2Nodes 5.8
236 TestMultiNode/serial/PingHostFrom2Pods 1.04
237 TestMultiNode/serial/AddNode 49.36
238 TestMultiNode/serial/MultiNodeLabels 0.1
239 TestMultiNode/serial/ProfileList 0.33
240 TestMultiNode/serial/CopyFile 10.35
241 TestMultiNode/serial/StopNode 2.3
242 TestMultiNode/serial/StartAfterStop 9.85
243 TestMultiNode/serial/RestartKeepsNodes 103.76
244 TestMultiNode/serial/DeleteNode 5.57
245 TestMultiNode/serial/StopMultiNode 23.81
246 TestMultiNode/serial/RestartMultiNode 64.71
247 TestMultiNode/serial/ValidateNameConflict 35.46
252 TestPreload 121.3
254 TestScheduledStopUnix 107.63
257 TestInsufficientStorage 10.65
258 TestRunningBinaryUpgrade 78.94
260 TestKubernetesUpgrade 396.53
261 TestMissingContainerUpgrade 152.52
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
264 TestNoKubernetes/serial/StartWithK8s 40.33
265 TestNoKubernetes/serial/StartWithStopK8s 13.72
266 TestNoKubernetes/serial/Start 10.69
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.41
268 TestNoKubernetes/serial/ProfileList 5.86
269 TestNoKubernetes/serial/Stop 1.28
270 TestNoKubernetes/serial/StartNoArgs 6.96
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
272 TestStoppedBinaryUpgrade/Setup 1.11
273 TestStoppedBinaryUpgrade/Upgrade 76.46
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.13
283 TestPause/serial/Start 78.41
284 TestPause/serial/SecondStartNoReconfiguration 25.02
285 TestPause/serial/Pause 1.06
286 TestPause/serial/VerifyStatus 0.37
287 TestPause/serial/Unpause 0.87
288 TestPause/serial/PauseAgain 1.26
289 TestPause/serial/DeletePaused 2.84
290 TestPause/serial/VerifyDeletedResources 0.51
298 TestNetworkPlugins/group/false 4.71
303 TestStartStop/group/old-k8s-version/serial/FirstStart 142.4
304 TestStartStop/group/old-k8s-version/serial/DeployApp 8.7
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.89
306 TestStartStop/group/old-k8s-version/serial/Stop 12.19
308 TestStartStop/group/no-preload/serial/FirstStart 70.44
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
311 TestStartStop/group/no-preload/serial/DeployApp 9.37
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.24
313 TestStartStop/group/no-preload/serial/Stop 12.06
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
315 TestStartStop/group/no-preload/serial/SecondStart 298.01
316 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
320 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
321 TestStartStop/group/old-k8s-version/serial/Pause 3.02
322 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.34
323 TestStartStop/group/no-preload/serial/Pause 3.84
325 TestStartStop/group/embed-certs/serial/FirstStart 89.93
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.73
328 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.41
329 TestStartStop/group/embed-certs/serial/DeployApp 9.35
330 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.28
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.41
332 TestStartStop/group/embed-certs/serial/Stop 12.02
333 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.94
334 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
335 TestStartStop/group/embed-certs/serial/SecondStart 272.39
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.32
337 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 298.79
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.11
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
341 TestStartStop/group/embed-certs/serial/Pause 3.63
343 TestStartStop/group/newest-cni/serial/FirstStart 46.92
344 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.03
345 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.14
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.29
348 TestNetworkPlugins/group/auto/Start 86.17
349 TestStartStop/group/newest-cni/serial/DeployApp 0
350 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.24
351 TestStartStop/group/newest-cni/serial/Stop 1.29
352 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
353 TestStartStop/group/newest-cni/serial/SecondStart 20.73
354 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
357 TestStartStop/group/newest-cni/serial/Pause 2.95
358 TestNetworkPlugins/group/kindnet/Start 85.74
359 TestNetworkPlugins/group/auto/KubeletFlags 0.36
360 TestNetworkPlugins/group/auto/NetCatPod 10.3
361 TestNetworkPlugins/group/auto/DNS 0.2
362 TestNetworkPlugins/group/auto/Localhost 0.16
363 TestNetworkPlugins/group/auto/HairPin 0.19
364 TestNetworkPlugins/group/calico/Start 75.71
365 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
366 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
367 TestNetworkPlugins/group/kindnet/NetCatPod 12.29
368 TestNetworkPlugins/group/kindnet/DNS 0.23
369 TestNetworkPlugins/group/kindnet/Localhost 0.15
370 TestNetworkPlugins/group/kindnet/HairPin 0.16
371 TestNetworkPlugins/group/custom-flannel/Start 71.04
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/calico/KubeletFlags 0.43
374 TestNetworkPlugins/group/calico/NetCatPod 12.36
375 TestNetworkPlugins/group/calico/DNS 0.25
376 TestNetworkPlugins/group/calico/Localhost 0.16
377 TestNetworkPlugins/group/calico/HairPin 0.19
378 TestNetworkPlugins/group/enable-default-cni/Start 94.34
379 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
380 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.34
381 TestNetworkPlugins/group/custom-flannel/DNS 0.26
382 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
383 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
384 TestNetworkPlugins/group/flannel/Start 71.12
385 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
386 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.29
387 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
388 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
389 TestNetworkPlugins/group/enable-default-cni/HairPin 0.43
390 TestNetworkPlugins/group/flannel/ControllerPod 6.01
391 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
392 TestNetworkPlugins/group/flannel/NetCatPod 10.39
393 TestNetworkPlugins/group/bridge/Start 86.98
394 TestNetworkPlugins/group/flannel/DNS 0.43
395 TestNetworkPlugins/group/flannel/Localhost 0.25
396 TestNetworkPlugins/group/flannel/HairPin 0.24
397 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
398 TestNetworkPlugins/group/bridge/NetCatPod 10.25
399 TestNetworkPlugins/group/bridge/DNS 0.16
400 TestNetworkPlugins/group/bridge/Localhost 0.14
401 TestNetworkPlugins/group/bridge/HairPin 0.18

TestDownloadOnly/v1.20.0/json-events (11.51s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-755699 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-755699 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.505993611s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.51s)
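With -o=json, minikube emits one CloudEvents-style JSON object per line, which this json-events subtest consumes. A minimal sketch of inspecting that stream by hand (not part of the test run; assumes jq is available, and that the io.k8s.sigs.minikube.step type string follows minikube's JSON output convention):

	out/minikube-linux-arm64 start -o=json --download-only -p download-only-755699 --force \
	  --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'   # print each progress step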

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
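preload-exists passes when the tarball cached by the json-events run above is present on disk. A minimal manual check, assuming the default ~/.minikube cache layout (this CI run keeps MINIKUBE_HOME under /home/jenkins/minikube-integration instead):

	ls -lh ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4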

TestDownloadOnly/v1.20.0/LogsDuration (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-755699
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-755699: exit status 85 (227.135757ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-755699 | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:11 UTC |          |
	|         | -p download-only-755699        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=docker                |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 21:11:55
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 21:11:55.232825 1151368 out.go:291] Setting OutFile to fd 1 ...
	I0328 21:11:55.232942 1151368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:11:55.232951 1151368 out.go:304] Setting ErrFile to fd 2...
	I0328 21:11:55.232956 1151368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:11:55.233230 1151368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17877-1145955/.minikube/bin
	W0328 21:11:55.233354 1151368 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17877-1145955/.minikube/config/config.json: open /home/jenkins/minikube-integration/17877-1145955/.minikube/config/config.json: no such file or directory
	I0328 21:11:55.233744 1151368 out.go:298] Setting JSON to true
	I0328 21:11:55.234580 1151368 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17666,"bootTime":1711642650,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0328 21:11:55.234648 1151368 start.go:139] virtualization:  
	I0328 21:11:55.237905 1151368 out.go:97] [download-only-755699] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 21:11:55.239828 1151368 out.go:169] MINIKUBE_LOCATION=17877
	W0328 21:11:55.238105 1151368 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/preloaded-tarball: no such file or directory
	I0328 21:11:55.238148 1151368 notify.go:220] Checking for updates...
	I0328 21:11:55.243553 1151368 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 21:11:55.245721 1151368 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17877-1145955/kubeconfig
	I0328 21:11:55.247701 1151368 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17877-1145955/.minikube
	I0328 21:11:55.249868 1151368 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0328 21:11:55.254168 1151368 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0328 21:11:55.254437 1151368 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 21:11:55.276994 1151368 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 21:11:55.277101 1151368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 21:11:55.338791 1151368 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-28 21:11:55.329756008 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 21:11:55.338913 1151368 docker.go:295] overlay module found
	I0328 21:11:55.341033 1151368 out.go:97] Using the docker driver based on user configuration
	I0328 21:11:55.341059 1151368 start.go:297] selected driver: docker
	I0328 21:11:55.341065 1151368 start.go:901] validating driver "docker" against <nil>
	I0328 21:11:55.341164 1151368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 21:11:55.392574 1151368 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-28 21:11:55.3826482 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 21:11:55.392740 1151368 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 21:11:55.393011 1151368 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0328 21:11:55.393170 1151368 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0328 21:11:55.395442 1151368 out.go:169] Using Docker driver with root privileges
	I0328 21:11:55.397280 1151368 cni.go:84] Creating CNI manager for ""
	I0328 21:11:55.397304 1151368 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0328 21:11:55.397316 1151368 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0328 21:11:55.397409 1151368 start.go:340] cluster config:
	{Name:download-only-755699 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-755699 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 21:11:55.399710 1151368 out.go:97] Starting "download-only-755699" primary control-plane node in "download-only-755699" cluster
	I0328 21:11:55.399756 1151368 cache.go:121] Beginning downloading kic base image for docker with crio
	I0328 21:11:55.401528 1151368 out.go:97] Pulling base image v0.0.43-1711559786-18485 ...
	I0328 21:11:55.401572 1151368 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0328 21:11:55.401599 1151368 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon
	I0328 21:11:55.415235 1151368 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 to local cache
	I0328 21:11:55.415431 1151368 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory
	I0328 21:11:55.415528 1151368 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 to local cache
	I0328 21:11:55.476957 1151368 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0328 21:11:55.476981 1151368 cache.go:56] Caching tarball of preloaded images
	I0328 21:11:55.477139 1151368 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0328 21:11:55.479710 1151368 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0328 21:11:55.479738 1151368 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0328 21:11:55.602236 1151368 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0328 21:12:00.710915 1151368 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 as a tarball
	I0328 21:12:02.775005 1151368 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0328 21:12:02.775110 1151368 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0328 21:12:03.889944 1151368 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0328 21:12:03.890311 1151368 profile.go:143] Saving config to /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/download-only-755699/config.json ...
	I0328 21:12:03.890345 1151368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/download-only-755699/config.json: {Name:mk01c60a313fafba6f3cc6b8aa60293f3aeaa4c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 21:12:03.891001 1151368 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0328 21:12:03.891209 1151368 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-755699 host does not exist
	  To start a cluster, run: "minikube start -p download-only-755699"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.23s)
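The preload download in the log above pins an MD5 checksum in the URL (checksum=md5:59cd2ef07b53f039bfd1761b921f2a02), which preload.go verifies after saving. A hand-rolled sketch of the same check, again assuming the default ~/.minikube cache directory rather than this CI host's MINIKUBE_HOME:

	echo "59cd2ef07b53f039bfd1761b921f2a02  preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4" \
	  | (cd ~/.minikube/cache/preloaded-tarball && md5sum -c -)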

TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-755699
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.29.3/json-events (7.73s)

=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-856904 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-856904 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.727109593s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (7.73s)

TestDownloadOnly/v1.29.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-856904
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-856904: exit status 85 (87.842906ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-755699 | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:11 UTC |                     |
	|         | -p download-only-755699        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:12 UTC | 28 Mar 24 21:12 UTC |
	| delete  | -p download-only-755699        | download-only-755699 | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:12 UTC | 28 Mar 24 21:12 UTC |
	| start   | -o=json --download-only        | download-only-856904 | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:12 UTC |                     |
	|         | -p download-only-856904        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 21:12:07
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 21:12:07.293409 1151534 out.go:291] Setting OutFile to fd 1 ...
	I0328 21:12:07.293541 1151534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:12:07.293552 1151534 out.go:304] Setting ErrFile to fd 2...
	I0328 21:12:07.293557 1151534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:12:07.293812 1151534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17877-1145955/.minikube/bin
	I0328 21:12:07.294224 1151534 out.go:298] Setting JSON to true
	I0328 21:12:07.295047 1151534 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17678,"bootTime":1711642650,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0328 21:12:07.295120 1151534 start.go:139] virtualization:  
	I0328 21:12:07.297737 1151534 out.go:97] [download-only-856904] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 21:12:07.299879 1151534 out.go:169] MINIKUBE_LOCATION=17877
	I0328 21:12:07.297925 1151534 notify.go:220] Checking for updates...
	I0328 21:12:07.303798 1151534 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 21:12:07.306057 1151534 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17877-1145955/kubeconfig
	I0328 21:12:07.307872 1151534 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17877-1145955/.minikube
	I0328 21:12:07.309509 1151534 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0328 21:12:07.312955 1151534 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0328 21:12:07.313211 1151534 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 21:12:07.332241 1151534 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 21:12:07.332358 1151534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 21:12:07.394110 1151534 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-28 21:12:07.384032295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 21:12:07.394210 1151534 docker.go:295] overlay module found
	I0328 21:12:07.396244 1151534 out.go:97] Using the docker driver based on user configuration
	I0328 21:12:07.396268 1151534 start.go:297] selected driver: docker
	I0328 21:12:07.396276 1151534 start.go:901] validating driver "docker" against <nil>
	I0328 21:12:07.396387 1151534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 21:12:07.448513 1151534 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-28 21:12:07.439621224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 21:12:07.448690 1151534 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 21:12:07.449005 1151534 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0328 21:12:07.449176 1151534 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0328 21:12:07.451650 1151534 out.go:169] Using Docker driver with root privileges
	I0328 21:12:07.453763 1151534 cni.go:84] Creating CNI manager for ""
	I0328 21:12:07.453788 1151534 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0328 21:12:07.453798 1151534 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0328 21:12:07.453899 1151534 start.go:340] cluster config:
	{Name:download-only-856904 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-856904 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 21:12:07.455963 1151534 out.go:97] Starting "download-only-856904" primary control-plane node in "download-only-856904" cluster
	I0328 21:12:07.455998 1151534 cache.go:121] Beginning downloading kic base image for docker with crio
	I0328 21:12:07.458573 1151534 out.go:97] Pulling base image v0.0.43-1711559786-18485 ...
	I0328 21:12:07.458628 1151534 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 21:12:07.458728 1151534 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon
	I0328 21:12:07.475601 1151534 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 to local cache
	I0328 21:12:07.475755 1151534 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory
	I0328 21:12:07.475781 1151534 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory, skipping pull
	I0328 21:12:07.475792 1151534 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 exists in cache, skipping pull
	I0328 21:12:07.475801 1151534 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 as a tarball
	I0328 21:12:07.537477 1151534 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-arm64.tar.lz4
	I0328 21:12:07.537507 1151534 cache.go:56] Caching tarball of preloaded images
	I0328 21:12:07.538197 1151534 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0328 21:12:07.540685 1151534 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0328 21:12:07.540738 1151534 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-arm64.tar.lz4 ...
	I0328 21:12:07.654173 1151534 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:84fdcab7b9f3aeb3e0da1cc4f5f14b7b -> /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-856904 host does not exist
	  To start a cluster, run: "minikube start -p download-only-856904"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.09s)

TestDownloadOnly/v1.29.3/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.21s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-856904
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.0-beta.0/json-events (10.75s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-198145 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-198145 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.746111666s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/json-events (10.75s)

TestDownloadOnly/v1.30.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-198145
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-198145: exit status 85 (79.814528ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-755699 | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:11 UTC |                     |
	|         | -p download-only-755699             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |                |                     |                     |
	|         | --container-runtime=crio            |                      |         |                |                     |                     |
	|         | --driver=docker                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio            |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:12 UTC | 28 Mar 24 21:12 UTC |
	| delete  | -p download-only-755699             | download-only-755699 | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:12 UTC | 28 Mar 24 21:12 UTC |
	| start   | -o=json --download-only             | download-only-856904 | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:12 UTC |                     |
	|         | -p download-only-856904             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3        |                      |         |                |                     |                     |
	|         | --container-runtime=crio            |                      |         |                |                     |                     |
	|         | --driver=docker                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio            |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:12 UTC | 28 Mar 24 21:12 UTC |
	| delete  | -p download-only-856904             | download-only-856904 | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:12 UTC | 28 Mar 24 21:12 UTC |
	| start   | -o=json --download-only             | download-only-198145 | jenkins | v1.33.0-beta.0 | 28 Mar 24 21:12 UTC |                     |
	|         | -p download-only-198145             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0 |                      |         |                |                     |                     |
	|         | --container-runtime=crio            |                      |         |                |                     |                     |
	|         | --driver=docker                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio            |                      |         |                |                     |                     |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 21:12:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 21:12:15.454880 1151700 out.go:291] Setting OutFile to fd 1 ...
	I0328 21:12:15.455052 1151700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:12:15.455064 1151700 out.go:304] Setting ErrFile to fd 2...
	I0328 21:12:15.455069 1151700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:12:15.455328 1151700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17877-1145955/.minikube/bin
	I0328 21:12:15.455735 1151700 out.go:298] Setting JSON to true
	I0328 21:12:15.456609 1151700 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17686,"bootTime":1711642650,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0328 21:12:15.456682 1151700 start.go:139] virtualization:  
	I0328 21:12:15.459533 1151700 out.go:97] [download-only-198145] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 21:12:15.461536 1151700 out.go:169] MINIKUBE_LOCATION=17877
	I0328 21:12:15.459721 1151700 notify.go:220] Checking for updates...
	I0328 21:12:15.465422 1151700 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 21:12:15.467447 1151700 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17877-1145955/kubeconfig
	I0328 21:12:15.469089 1151700 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17877-1145955/.minikube
	I0328 21:12:15.471059 1151700 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0328 21:12:15.475190 1151700 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0328 21:12:15.475494 1151700 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 21:12:15.494207 1151700 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 21:12:15.494321 1151700 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 21:12:15.569385 1151700 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-28 21:12:15.559703679 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 21:12:15.569501 1151700 docker.go:295] overlay module found
	I0328 21:12:15.571744 1151700 out.go:97] Using the docker driver based on user configuration
	I0328 21:12:15.571777 1151700 start.go:297] selected driver: docker
	I0328 21:12:15.571785 1151700 start.go:901] validating driver "docker" against <nil>
	I0328 21:12:15.571881 1151700 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 21:12:15.622665 1151700 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-28 21:12:15.614025573 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 21:12:15.622831 1151700 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 21:12:15.623093 1151700 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0328 21:12:15.623258 1151700 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0328 21:12:15.625511 1151700 out.go:169] Using Docker driver with root privileges
	I0328 21:12:15.627571 1151700 cni.go:84] Creating CNI manager for ""
	I0328 21:12:15.627599 1151700 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0328 21:12:15.627612 1151700 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0328 21:12:15.627709 1151700 start.go:340] cluster config:
	{Name:download-only-198145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:download-only-198145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 21:12:15.629890 1151700 out.go:97] Starting "download-only-198145" primary control-plane node in "download-only-198145" cluster
	I0328 21:12:15.629917 1151700 cache.go:121] Beginning downloading kic base image for docker with crio
	I0328 21:12:15.631937 1151700 out.go:97] Pulling base image v0.0.43-1711559786-18485 ...
	I0328 21:12:15.631961 1151700 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0328 21:12:15.632061 1151700 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon
	I0328 21:12:15.644865 1151700 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 to local cache
	I0328 21:12:15.645006 1151700 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory
	I0328 21:12:15.645025 1151700 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory, skipping pull
	I0328 21:12:15.645030 1151700 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 exists in cache, skipping pull
	I0328 21:12:15.645038 1151700 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 as a tarball
	I0328 21:12:15.700157 1151700 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I0328 21:12:15.700183 1151700 cache.go:56] Caching tarball of preloaded images
	I0328 21:12:15.700362 1151700 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0328 21:12:15.702620 1151700 out.go:97] Downloading Kubernetes v1.30.0-beta.0 preload ...
	I0328 21:12:15.702642 1151700 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-arm64.tar.lz4 ...
	I0328 21:12:15.811509 1151700 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:869a9f80cd246e74d899316f2e05b887 -> /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I0328 21:12:21.736126 1151700 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-arm64.tar.lz4 ...
	I0328 21:12:21.736230 1151700 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-arm64.tar.lz4 ...
	I0328 21:12:22.591300 1151700 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on crio
	I0328 21:12:22.591681 1151700 profile.go:143] Saving config to /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/download-only-198145/config.json ...
	I0328 21:12:22.591718 1151700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/download-only-198145/config.json: {Name:mk411af04cc90ae6c27693fe446dd8df943f84f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 21:12:22.592568 1151700 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime crio
	I0328 21:12:22.593049 1151700 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-beta.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17877-1145955/.minikube/cache/linux/arm64/v1.30.0-beta.0/kubectl
	
	
	* The control-plane node download-only-198145 host does not exist
	  To start a cluster, run: "minikube start -p download-only-198145"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.08s)
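The preload step above downloads the tarball with a ?checksum=md5:... query parameter and then verifies the file on disk before trusting it (preload.go:237-255). Below is a minimal Go sketch of that download-then-verify flow, using the URL and md5 value from the log; downloadAndVerify is an illustrative helper, not minikube's actual downloader.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadAndVerify fetches url into dest and compares the stream's MD5
// against wantMD5 (hex-encoded), mirroring the checksum=md5:... step above.
func downloadAndVerify(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Tee the body so the file is written and hashed in a single pass.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	err := downloadAndVerify(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-cri-o-overlay-arm64.tar.lz4",
		"/tmp/preload.tar.lz4",        // illustrative destination path
		"869a9f80cd246e74d899316f2e05b887", // md5 from the log's checksum parameter
	)
	fmt.Println("verify:", err)
}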

TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.20s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-198145
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.53s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-668108 --alsologtostderr --binary-mirror http://127.0.0.1:45097 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-668108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-668108
--- PASS: TestBinaryMirror (0.53s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-564371
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-564371: exit status 85 (89.358219ms)

-- stdout --
	* Profile "addons-564371" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-564371"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-564371
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-564371: exit status 85 (97.175686ms)

-- stdout --
	* Profile "addons-564371" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-564371"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

TestAddons/Setup (171.52s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-564371 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-564371 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m51.51630985s)
--- PASS: TestAddons/Setup (171.52s)

TestAddons/parallel/Registry (16.42s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 38.812777ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-xs99m" [d5859bb3-d004-4e14-b8bd-94a73c9673a1] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004642246s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-v2zfr" [37b97352-e798-4a32-a0d4-3808ead8f4b0] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005596167s
addons_test.go:340: (dbg) Run:  kubectl --context addons-564371 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-564371 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-564371 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.296533808s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-564371 ip
2024/03/28 21:15:35 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-564371 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.42s)
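The registry check above wgets http://registry.kube-system.svc.cluster.local from a busybox pod and then hits the node's registry port directly (the DEBUG GET against 192.168.49.2:5000). A hedged Go sketch of that reachability probe, assuming the node IP and port from the log; probeRegistry is an illustrative helper, not the test's code.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeRegistry issues a HEAD request against the registry endpoint,
// the same kind of reachability check the wget --spider run above performs.
func probeRegistry(url string) error {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Head(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("registry returned %s", resp.Status)
	}
	return nil
}

func main() {
	// NodeIP:5000 is the registry-proxy address the test resolves via "minikube ip".
	fmt.Println(probeRegistry("http://192.168.49.2:5000"))
}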

TestAddons/parallel/InspektorGadget (11.79s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2btx6" [66740386-01bd-4cdd-99b1-4da7043e3d70] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003970487s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-564371
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-564371: (5.789866495s)
--- PASS: TestAddons/parallel/InspektorGadget (11.79s)

TestAddons/parallel/MetricsServer (5.8s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 5.109916ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-vf465" [f04170f8-a288-4282-a0df-90db24d0b88e] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004486539s
addons_test.go:415: (dbg) Run:  kubectl --context addons-564371 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-564371 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.80s)

TestAddons/parallel/CSI (59.85s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 38.587958ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-564371 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-564371 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9c7b614f-f93d-4ffa-9adf-738ab7b42278] Pending
helpers_test.go:344: "task-pv-pod" [9c7b614f-f93d-4ffa-9adf-738ab7b42278] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9c7b614f-f93d-4ffa-9adf-738ab7b42278] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004049276s
addons_test.go:584: (dbg) Run:  kubectl --context addons-564371 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-564371 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-564371 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-564371 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-564371 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-564371 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-564371 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e0618ad6-a820-4f01-928f-4de58e0f8f3e] Pending
helpers_test.go:344: "task-pv-pod-restore" [e0618ad6-a820-4f01-928f-4de58e0f8f3e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e0618ad6-a820-4f01-928f-4de58e0f8f3e] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003523667s
addons_test.go:626: (dbg) Run:  kubectl --context addons-564371 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-564371 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-564371 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-564371 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-564371 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.777988264s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-564371 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (59.85s)
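The repeated helpers_test.go:394 lines above are a poll loop: the helper re-runs the same jsonpath query until the PVC reports the phase it is waiting for. A small Go sketch of the equivalent loop, with context, claim, and timeout values taken from the log; waitForPVCPhase is an illustrative helper, not the suite's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase re-runs the same jsonpath query the helper above loops on,
// until the claim reports the wanted phase or the deadline passes.
func waitForPVCPhase(kubectx, name, ns, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectx,
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second) // poll interval is an assumption
	}
	return fmt.Errorf("pvc %s/%s never reached phase %q", ns, name, want)
}

func main() {
	fmt.Println(waitForPVCPhase("addons-564371", "hpvc", "default", "Bound", 6*time.Minute))
}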

TestAddons/parallel/Headlamp (11.03s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-564371 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-564371 --alsologtostderr -v=1: (1.022915776s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-ssw5x" [a53afdb0-e463-4a5a-bc51-09799ae8238f] Pending
helpers_test.go:344: "headlamp-5b77dbd7c4-ssw5x" [a53afdb0-e463-4a5a-bc51-09799ae8238f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-ssw5x" [a53afdb0-e463-4a5a-bc51-09799ae8238f] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.00387779s
--- PASS: TestAddons/parallel/Headlamp (11.03s)

TestAddons/parallel/CloudSpanner (6.63s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-qrvm7" [da1f633b-0d7f-4873-a3e4-3f7b7c810930] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005638524s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-564371
--- PASS: TestAddons/parallel/CloudSpanner (6.63s)

TestAddons/parallel/LocalPath (9.56s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-564371 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-564371 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-564371 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9bf53904-7a6c-45be-afdd-8d29c784f3c1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9bf53904-7a6c-45be-afdd-8d29c784f3c1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9bf53904-7a6c-45be-afdd-8d29c784f3c1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.006702s
addons_test.go:891: (dbg) Run:  kubectl --context addons-564371 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-564371 ssh "cat /opt/local-path-provisioner/pvc-001a98eb-32e3-4d5c-9dcb-b90328f56941_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-564371 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-564371 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-564371 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.56s)

TestAddons/parallel/NvidiaDevicePlugin (5.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-98h7b" [a89ce652-ae4e-4723-8514-ba6a7a219889] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00600914s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-564371
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.54s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-4qqbw" [0c5c2aa6-84da-4ba3-9c8f-72ae6bf055cb] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004987134s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.21s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-564371 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-564371 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

TestAddons/StoppedEnableDisable (12.36s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-564371
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-564371: (12.048333764s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-564371
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-564371
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-564371
--- PASS: TestAddons/StoppedEnableDisable (12.36s)

TestCertOptions (34.11s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-487813 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0328 22:03:15.901654 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-487813 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (31.458682596s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-487813 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-487813 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-487813 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-487813" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-487813
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-487813: (1.992893965s)
--- PASS: TestCertOptions (34.11s)
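The openssl x509 -text inspection above verifies that the extra --apiserver-ips/--apiserver-names made it into the apiserver certificate's SANs. A sketch of the same check in Go, assuming the cert path shown in the log; checkSANs is an illustrative helper.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

// checkSANs parses apiserver.crt and reports whether the requested IP and DNS
// SANs are present, which is what the openssl inspection above checks by eye.
func checkSANs(path string, wantIP net.IP, wantDNS string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	ipOK, dnsOK := false, false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(wantIP) {
			ipOK = true
		}
	}
	for _, d := range cert.DNSNames {
		if d == wantDNS {
			dnsOK = true
		}
	}
	if !ipOK || !dnsOK {
		return fmt.Errorf("missing SANs: ip present=%v dns present=%v", ipOK, dnsOK)
	}
	return nil
}

func main() {
	fmt.Println(checkSANs("/var/lib/minikube/certs/apiserver.crt",
		net.ParseIP("192.168.15.15"), "www.google.com"))
}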

TestCertExpiration (239.45s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-478493 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-478493 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.382902688s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-478493 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-478493 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (17.781158564s)
helpers_test.go:175: Cleaning up "cert-expiration-478493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-478493
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-478493: (2.287286875s)
--- PASS: TestCertExpiration (239.45s)
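The two starts above differ only in --cert-expiration (3m, then 8760h), which changes the validity window minted into the cluster certificates. A sketch that reads that window back from a certificate, assuming the same apiserver.crt path as the previous test; expiryWindow is an illustrative helper.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// expiryWindow reports how long a certificate is valid for (NotAfter - NotBefore),
// i.e. the quantity --cert-expiration controls in the two starts above.
func expiryWindow(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	return cert.NotAfter.Sub(cert.NotBefore).String(), nil
}

func main() {
	w, err := expiryWindow("/var/lib/minikube/certs/apiserver.crt")
	fmt.Println(w, err)
}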

TestForceSystemdFlag (40.84s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-331080 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-331080 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.125806953s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-331080 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-331080" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-331080
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-331080: (2.408940517s)
--- PASS: TestForceSystemdFlag (40.84s)

TestForceSystemdEnv (45.78s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-978460 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-978460 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.146764607s)
helpers_test.go:175: Cleaning up "force-systemd-env-978460" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-978460
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-978460: (2.634493302s)
--- PASS: TestForceSystemdEnv (45.78s)

TestErrorSpam/setup (34.01s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-417632 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-417632 --driver=docker  --container-runtime=crio
E0328 21:20:19.659637 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
E0328 21:20:19.666358 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
E0328 21:20:19.676613 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
E0328 21:20:19.696856 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
E0328 21:20:19.737067 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
E0328 21:20:19.817340 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
E0328 21:20:19.977825 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
E0328 21:20:20.298351 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
E0328 21:20:20.938885 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-417632 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-417632 --driver=docker  --container-runtime=crio: (34.007053288s)
--- PASS: TestErrorSpam/setup (34.01s)

TestErrorSpam/start (0.76s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-417632 --log_dir /tmp/nospam-417632 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-417632 --log_dir /tmp/nospam-417632 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-417632 --log_dir /tmp/nospam-417632 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

TestErrorSpam/status (0.96s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-417632 --log_dir /tmp/nospam-417632 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-417632 --log_dir /tmp/nospam-417632 status
E0328 21:20:22.219083 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-417632 --log_dir /tmp/nospam-417632 status
--- PASS: TestErrorSpam/status (0.96s)

TestErrorSpam/pause (1.72s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-417632 --log_dir /tmp/nospam-417632 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-417632 --log_dir /tmp/nospam-417632 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-417632 --log_dir /tmp/nospam-417632 pause
--- PASS: TestErrorSpam/pause (1.72s)

TestErrorSpam/unpause (1.78s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-417632 --log_dir /tmp/nospam-417632 unpause
E0328 21:20:24.779626 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-417632 --log_dir /tmp/nospam-417632 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-417632 --log_dir /tmp/nospam-417632 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

TestErrorSpam/stop (1.44s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-417632 --log_dir /tmp/nospam-417632 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-417632 --log_dir /tmp/nospam-417632 stop: (1.231783695s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-417632 --log_dir /tmp/nospam-417632 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-417632 --log_dir /tmp/nospam-417632 stop
--- PASS: TestErrorSpam/stop (1.44s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17877-1145955/.minikube/files/etc/test/nested/copy/1151363/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (77.54s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-351339 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0328 21:20:40.141932 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
E0328 21:21:00.622147 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
E0328 21:21:41.582390 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-351339 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m17.540269749s)
--- PASS: TestFunctional/serial/StartWithProxy (77.54s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.41s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-351339 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-351339 --alsologtostderr -v=8: (29.404282563s)
functional_test.go:659: soft start took 29.409379539s for "functional-351339" cluster.
--- PASS: TestFunctional/serial/SoftStart (29.41s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-351339 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-351339 cache add registry.k8s.io/pause:3.1: (1.265738907s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-351339 cache add registry.k8s.io/pause:3.3: (1.203062415s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-351339 cache add registry.k8s.io/pause:latest: (1.187715209s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.66s)

TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-351339 /tmp/TestFunctionalserialCacheCmdcacheadd_local1609567580/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 cache add minikube-local-cache-test:functional-351339
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 cache delete minikube-local-cache-test:functional-351339
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-351339
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-351339 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (312.799091ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.95s)
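The reload sequence above is: remove the image from the node, confirm crictl inspecti now fails, run cache reload, then confirm inspecti succeeds again. A sketch of driving that same sequence from Go, using only commands that appear in the log; the run helper is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// run mirrors the (dbg) Run: steps above: shell out, echo the output, and
// return the error so the caller can assert on expected failure vs success.
func run(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	fmt.Printf("$ %v\n%s", args, out)
	return err
}

func main() {
	p := "functional-351339"
	img := "registry.k8s.io/pause:latest"
	// Remove the image from the node, then confirm inspecti fails...
	run("out/minikube-linux-arm64", "-p", p, "ssh", "sudo crictl rmi "+img)
	if err := run("out/minikube-linux-arm64", "-p", p, "ssh", "sudo crictl inspecti "+img); err == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}
	// ...then restore it from minikube's local cache and re-check.
	run("out/minikube-linux-arm64", "-p", p, "cache", "reload")
	if err := run("out/minikube-linux-arm64", "-p", p, "ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}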

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 kubectl -- --context functional-351339 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-351339 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

TestFunctional/serial/ExtraConfig (40.33s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-351339 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0328 21:23:03.503341 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-351339 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.327045209s)
functional_test.go:757: restart took 40.327148535s for "functional-351339" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.33s)

TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-351339 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)
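The phase/status lines above come from parsing kubectl get po -o=json for the control-plane pods and reading each pod's phase plus its Ready condition. A sketch of that parse, decoding only the fields the check needs; the podList struct is illustrative, not the test's actual types.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList models just the fields the health check above reads: the component
// label, the pod phase, and the Ready condition.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-351339",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}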

TestFunctional/serial/LogsCmd (1.68s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-351339 logs: (1.67731472s)
--- PASS: TestFunctional/serial/LogsCmd (1.68s)

TestFunctional/serial/LogsFileCmd (1.71s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 logs --file /tmp/TestFunctionalserialLogsFileCmd3679026958/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-351339 logs --file /tmp/TestFunctionalserialLogsFileCmd3679026958/001/logs.txt: (1.711091222s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.71s)

TestFunctional/serial/InvalidService (4.34s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-351339 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-351339
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-351339: exit status 115 (614.071151ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31887 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-351339 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.34s)
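minikube exits with SVC_UNREACHABLE above because invalid-svc has no running pod behind it, i.e. no ready endpoints. A sketch of checking that precondition directly, assuming the context and service names from the log; hasReadyEndpoints is an illustrative helper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasReadyEndpoints reports whether a Service has any ready endpoint addresses;
// an empty result is the condition that triggers SVC_UNREACHABLE above.
func hasReadyEndpoints(kubectx, svc, ns string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", kubectx,
		"get", "endpoints", svc, "-n", ns,
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) != "", nil
}

func main() {
	ok, err := hasReadyEndpoints("functional-351339", "invalid-svc", "default")
	fmt.Println("ready endpoints:", ok, err)
}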

TestFunctional/parallel/ConfigCmd (0.56s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-351339 config get cpus: exit status 14 (87.436749ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-351339 config get cpus: exit status 14 (107.882129ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.56s)
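
The config round-trip above, condensed into a runnable sketch:

    # "config get" on an unset key exits 14 with "specified key could not
    # be found in config"; set/unset exit 0.
    out/minikube-linux-arm64 -p functional-351339 config unset cpus
    out/minikube-linux-arm64 -p functional-351339 config get cpus; echo $?   # 14
    out/minikube-linux-arm64 -p functional-351339 config set cpus 2
    out/minikube-linux-arm64 -p functional-351339 config get cpus            # prints 2
    out/minikube-linux-arm64 -p functional-351339 config unset cpus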

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-351339 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-351339 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1176729: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.35s)
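
To run the same dashboard command interactively (the test backgrounds it and then kills it, hence the benign "unable to kill pid" note above):

    # Serves the dashboard proxy on a fixed port and prints the URL
    # without opening a browser; stop it with Ctrl-C.
    out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-351339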

                                                
                                    
TestFunctional/parallel/DryRun (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-351339 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-351339 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (183.90493ms)

                                                
                                                
-- stdout --
	* [functional-351339] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17877-1145955/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17877-1145955/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 21:23:47.939039 1176331 out.go:291] Setting OutFile to fd 1 ...
	I0328 21:23:47.939175 1176331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:23:47.939182 1176331 out.go:304] Setting ErrFile to fd 2...
	I0328 21:23:47.939186 1176331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:23:47.939440 1176331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17877-1145955/.minikube/bin
	I0328 21:23:47.939791 1176331 out.go:298] Setting JSON to false
	I0328 21:23:47.940742 1176331 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18378,"bootTime":1711642650,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0328 21:23:47.940823 1176331 start.go:139] virtualization:  
	I0328 21:23:47.943641 1176331 out.go:177] * [functional-351339] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 21:23:47.946541 1176331 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 21:23:47.948266 1176331 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 21:23:47.946612 1176331 notify.go:220] Checking for updates...
	I0328 21:23:47.952001 1176331 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17877-1145955/kubeconfig
	I0328 21:23:47.953672 1176331 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17877-1145955/.minikube
	I0328 21:23:47.955755 1176331 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0328 21:23:47.957529 1176331 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 21:23:47.960137 1176331 config.go:182] Loaded profile config "functional-351339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 21:23:47.960701 1176331 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 21:23:47.981377 1176331 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 21:23:47.981492 1176331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 21:23:48.051469 1176331 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-03-28 21:23:48.040004999 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 21:23:48.051610 1176331 docker.go:295] overlay module found
	I0328 21:23:48.054373 1176331 out.go:177] * Using the docker driver based on existing profile
	I0328 21:23:48.056622 1176331 start.go:297] selected driver: docker
	I0328 21:23:48.056643 1176331 start.go:901] validating driver "docker" against &{Name:functional-351339 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-351339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 21:23:48.056762 1176331 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 21:23:48.059306 1176331 out.go:177] 
	W0328 21:23:48.060982 1176331 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0328 21:23:48.062948 1176331 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-351339 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.44s)
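
A standalone sketch of the validation exercised above; nothing is started, the flags are only checked against the existing profile:

    # 250MB is below minikube's usable minimum of 1800MB, so this exits 23
    # with RSRC_INSUFFICIENT_REQ_MEMORY, as shown in the captured stderr.
    out/minikube-linux-arm64 start -p functional-351339 --dry-run --memory 250MB \
      --driver=docker --container-runtime=crio
    echo $?   # 23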

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-351339 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-351339 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (193.171774ms)

                                                
                                                
-- stdout --
	* [functional-351339] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17877-1145955/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17877-1145955/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 21:23:47.750039 1176291 out.go:291] Setting OutFile to fd 1 ...
	I0328 21:23:47.750244 1176291 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:23:47.750274 1176291 out.go:304] Setting ErrFile to fd 2...
	I0328 21:23:47.750296 1176291 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:23:47.751635 1176291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17877-1145955/.minikube/bin
	I0328 21:23:47.752081 1176291 out.go:298] Setting JSON to false
	I0328 21:23:47.753101 1176291 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18378,"bootTime":1711642650,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0328 21:23:47.753209 1176291 start.go:139] virtualization:  
	I0328 21:23:47.756485 1176291 out.go:177] * [functional-351339] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (arm64)
	I0328 21:23:47.759095 1176291 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 21:23:47.760768 1176291 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 21:23:47.759180 1176291 notify.go:220] Checking for updates...
	I0328 21:23:47.764385 1176291 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17877-1145955/kubeconfig
	I0328 21:23:47.766266 1176291 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17877-1145955/.minikube
	I0328 21:23:47.767974 1176291 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0328 21:23:47.769794 1176291 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 21:23:47.772191 1176291 config.go:182] Loaded profile config "functional-351339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 21:23:47.772714 1176291 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 21:23:47.797050 1176291 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 21:23:47.797190 1176291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 21:23:47.866062 1176291 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-03-28 21:23:47.856371804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 21:23:47.866169 1176291 docker.go:295] overlay module found
	I0328 21:23:47.868686 1176291 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0328 21:23:47.870440 1176291 start.go:297] selected driver: docker
	I0328 21:23:47.870458 1176291 start.go:901] validating driver "docker" against &{Name:functional-351339 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-351339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 21:23:47.870570 1176291 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 21:23:47.874139 1176291 out.go:177] 
	W0328 21:23:47.876210 1176291 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0328 21:23:47.878190 1176291 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
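
The French output above comes from minikube's localized message catalog; this log does not show how the locale was selected, but presumably it is via the standard locale environment variables. A sketch under that assumption:

    # Assumption: a French LC_ALL/LANG selects the translated messages
    # ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ...") seen above.
    LC_ALL=fr out/minikube-linux-arm64 start -p functional-351339 --dry-run \
      --memory 250MB --driver=docker --container-runtime=crio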

                                                
                                    
TestFunctional/parallel/StatusCmd (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.15s)
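
The three status forms exercised above, side by side (the template keys come from minikube's status struct, as shown in the logged command):

    out/minikube-linux-arm64 -p functional-351339 status
    out/minikube-linux-arm64 -p functional-351339 status -f 'host:{{.Host}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-arm64 -p functional-351339 status -o json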

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-351339 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-351339 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-455nk" [2c1c4726-9dde-4522-ac3e-a7a10abd0cd3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-455nk" [2c1c4726-9dde-4522-ac3e-a7a10abd0cd3] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003792167s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:30615
functional_test.go:1671: http://192.168.49.2:30615: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-7799dfb7c6-455nk

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30615
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.69s)
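
End to end, the same flow as a runnable sketch:

    # Deploy the echo server, expose it as a NodePort, resolve its URL,
    # then hit it; the body echoes hostname and request headers as above.
    kubectl --context functional-351339 create deployment hello-node-connect \
      --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-351339 expose deployment hello-node-connect \
      --type=NodePort --port=8080
    URL=$(out/minikube-linux-arm64 -p functional-351339 service hello-node-connect --url)
    curl -s "$URL"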

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2c13e38f-77ba-4d94-be96-0762d1921d06] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004372464s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-351339 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-351339 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-351339 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-351339 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7f64b785-62b0-49f9-8765-74a344c3b7dc] Pending
helpers_test.go:344: "sp-pod" [7f64b785-62b0-49f9-8765-74a344c3b7dc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7f64b785-62b0-49f9-8765-74a344c3b7dc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004016796s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-351339 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-351339 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-351339 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c0c487e5-916c-48bb-8496-b42e659e0ccb] Pending
helpers_test.go:344: "sp-pod" [c0c487e5-916c-48bb-8496-b42e659e0ccb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003730164s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-351339 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.82s)
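
The claim applied above (testdata/storage-provisioner/pvc.yaml) is not reproduced in this log; a minimal equivalent that the default storage class can bind might look like this (only the claim name "myclaim" is taken from the log, the size is illustrative):

    # pvc.yaml -- hypothetical stand-in for testdata/storage-provisioner/pvc.yaml.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi

    # Apply it against the same context as the test:
    kubectl --context functional-351339 apply -f pvc.yaml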

                                                
                                    
TestFunctional/parallel/SSHCmd (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh -n functional-351339 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 cp functional-351339:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd674252282/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh -n functional-351339 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh -n functional-351339 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.24s)
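
The copy directions exercised above, condensed:

    # Host -> guest; guest paths may be addressed as "<profile>:<path>",
    # and missing parent directories (e.g. /tmp/does/not/exist) are created.
    out/minikube-linux-arm64 -p functional-351339 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # Guest -> host (destination path is a placeholder):
    out/minikube-linux-arm64 -p functional-351339 cp functional-351339:/home/docker/cp-test.txt /tmp/cp-test.txt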

                                                
                                    
TestFunctional/parallel/FileSync (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1151363/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "sudo cat /etc/test/nested/copy/1151363/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

                                                
                                    
TestFunctional/parallel/CertSync (2.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1151363.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "sudo cat /etc/ssl/certs/1151363.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1151363.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "sudo cat /usr/share/ca-certificates/1151363.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/11513632.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "sudo cat /etc/ssl/certs/11513632.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/11513632.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "sudo cat /usr/share/ca-certificates/11513632.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.54s)
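
The 51391683.0 and 3ec20f2e.0 names above appear to be OpenSSL subject-hash links: the synced certificate is installed both under its own name and under the hash that TLS libraries use for directory lookup. The hash can be recomputed from the PEM file:

    # Prints the hash that gives /etc/ssl/certs/<hash>.0 its name;
    # expected to print 51391683 for the cert synced above.
    openssl x509 -noout -hash -in 1151363.pem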

                                                
                                    
TestFunctional/parallel/NodeLabels (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-351339 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-351339 ssh "sudo systemctl is-active docker": exit status 1 (372.297486ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-351339 ssh "sudo systemctl is-active containerd": exit status 1 (331.918508ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.70s)
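
What the exit codes mean here: systemctl is-active prints the unit state and exits non-zero for anything other than "active" (3 for "inactive", matching "Process exited with status 3" above). A sketch:

    # With crio as the configured runtime, docker and containerd should be off.
    out/minikube-linux-arm64 -p functional-351339 ssh "sudo systemctl is-active docker"      # inactive, exit 3
    out/minikube-linux-arm64 -p functional-351339 ssh "sudo systemctl is-active containerd"  # inactive, exit 3
    out/minikube-linux-arm64 -p functional-351339 ssh "sudo systemctl is-active crio"        # assumption: active, exit 0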

                                                
                                    
TestFunctional/parallel/License (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-351339 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-351339 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-351339 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1174325: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-351339 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-351339 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-351339 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ced188b0-c336-4ffa-b861-3cf2cfe95ce8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ced188b0-c336-4ffa-b861-3cf2cfe95ce8] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004506542s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.36s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-351339 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.108.117 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
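
Pulled together, the tunnel flow above as a sketch (the curl target is the ingress IP reported by the test; what it serves is an assumption):

    # Run the tunnel in the background so LoadBalancer services get a
    # reachable ingress IP, then read the IP back and probe it.
    out/minikube-linux-arm64 -p functional-351339 tunnel &
    kubectl --context functional-351339 get svc nginx-svc \
      -o jsonpath={.status.loadBalancer.ingress[0].ip}   # 10.108.108.117 above
    curl -s http://10.108.108.117/   # assumption: returns the nginx default page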

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-351339 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-351339 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-351339 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-w7p5v" [4bc59a24-9007-4174-9f5a-353dc3b72c18] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-w7p5v" [4bc59a24-9007-4174-9f5a-353dc3b72c18] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003975413s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "365.509953ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "68.725001ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "321.980242ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "54.706287ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
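
The ~6x gap between the two timings above is presumably the cluster status probe that --light skips. A sketch for consuming the JSON output (the "valid"/"invalid" shape is an assumption, not shown in this log):

    # List profile names from the light JSON output; requires jq.
    out/minikube-linux-arm64 profile list -o json --light | jq -r '.valid[].Name'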

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-351339 /tmp/TestFunctionalparallelMountCmdany-port3835003640/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1711661022978854316" to /tmp/TestFunctionalparallelMountCmdany-port3835003640/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1711661022978854316" to /tmp/TestFunctionalparallelMountCmdany-port3835003640/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1711661022978854316" to /tmp/TestFunctionalparallelMountCmdany-port3835003640/001/test-1711661022978854316
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-351339 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (345.084436ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 28 21:23 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 28 21:23 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 28 21:23 test-1711661022978854316
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh cat /mount-9p/test-1711661022978854316
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-351339 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [15af07c5-ca8f-4d4a-a4d1-c9a22580952d] Pending
helpers_test.go:344: "busybox-mount" [15af07c5-ca8f-4d4a-a4d1-c9a22580952d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [15af07c5-ca8f-4d4a-a4d1-c9a22580952d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [15af07c5-ca8f-4d4a-a4d1-c9a22580952d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004016342s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-351339 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-351339 /tmp/TestFunctionalparallelMountCmdany-port3835003640/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.32s)
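
The mount lifecycle exercised above, condensed (the host directory is a placeholder):

    # Expose a host directory to the guest over 9p, verify it, then unmount.
    out/minikube-linux-arm64 mount -p functional-351339 /tmp/data:/mount-9p &
    out/minikube-linux-arm64 -p functional-351339 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-351339 ssh "ls -la /mount-9p"
    out/minikube-linux-arm64 -p functional-351339 ssh "sudo umount -f /mount-9p"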

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 service list -o json
functional_test.go:1490: Took "582.220865ms" to run "out/minikube-linux-arm64 -p functional-351339 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:32566
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:32566
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.45s)
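
The four lookups exercised above against the same service, side by side:

    out/minikube-linux-arm64 -p functional-351339 service list
    out/minikube-linux-arm64 -p functional-351339 service list -o json
    out/minikube-linux-arm64 -p functional-351339 service --namespace=default --https --url hello-node
    out/minikube-linux-arm64 -p functional-351339 service hello-node --url --format={{.IP}}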

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-351339 /tmp/TestFunctionalparallelMountCmdspecific-port4240261412/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-351339 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (503.464338ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-351339 /tmp/TestFunctionalparallelMountCmdspecific-port4240261412/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-351339 ssh "sudo umount -f /mount-9p": exit status 1 (382.69673ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-351339 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-351339 /tmp/TestFunctionalparallelMountCmdspecific-port4240261412/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.27s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-351339 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3342090063/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-351339 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3342090063/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-351339 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3342090063/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-351339 ssh "findmnt -T" /mount1: exit status 1 (621.361255ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-351339 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-351339 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3342090063/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-351339 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3342090063/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-351339 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3342090063/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.29s)
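
The cleanup path exercised here boils down to one flag. A sketch, with /tmp/host-dir again standing in for the test's temp directory:

  # Start several mounts of one host directory at different guest paths.
  out/minikube-linux-arm64 mount -p functional-351339 /tmp/host-dir:/mount1 &
  out/minikube-linux-arm64 mount -p functional-351339 /tmp/host-dir:/mount2 &
  out/minikube-linux-arm64 mount -p functional-351339 /tmp/host-dir:/mount3 &
  # Tear all of them down in one call; the backgrounded mount processes exit, which is
  # why the "stopping" steps above find no parent process.
  out/minikube-linux-arm64 mount -p functional-351339 --kill=true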

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.21s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-351339 version -o=json --components: (1.207984347s)
--- PASS: TestFunctional/parallel/Version/components (1.21s)
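
Both variants query the same binary; a minimal sketch:

  # Just the minikube version string.
  out/minikube-linux-arm64 -p functional-351339 version --short
  # Versions of the bundled components, emitted as JSON.
  out/minikube-linux-arm64 -p functional-351339 version -o=json --components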

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-351339 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-351339
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-351339 image ls --format short --alsologtostderr:
I0328 21:24:16.381685 1179191 out.go:291] Setting OutFile to fd 1 ...
I0328 21:24:16.381882 1179191 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 21:24:16.381894 1179191 out.go:304] Setting ErrFile to fd 2...
I0328 21:24:16.381916 1179191 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 21:24:16.382203 1179191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17877-1145955/.minikube/bin
I0328 21:24:16.382882 1179191 config.go:182] Loaded profile config "functional-351339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0328 21:24:16.383041 1179191 config.go:182] Loaded profile config "functional-351339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0328 21:24:16.383585 1179191 cli_runner.go:164] Run: docker container inspect functional-351339 --format={{.State.Status}}
I0328 21:24:16.399373 1179191 ssh_runner.go:195] Run: systemctl --version
I0328 21:24:16.399475 1179191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-351339
I0328 21:24:16.416970 1179191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34273 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/functional-351339/id_rsa Username:docker}
I0328 21:24:16.512488 1179191 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-351339 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-scheduler          | v1.29.3            | 4b51f9f6bc9b9 | 59.2MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 014faa467e297 | 140MB  |
| registry.k8s.io/kube-controller-manager | v1.29.3            | 121d70d9a3805 | 119MB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4740c1948d3fc | 60.9MB |
| gcr.io/google-containers/addon-resizer  | functional-351339  | ffd4cfbbe753e | 34.1MB |
| docker.io/library/nginx                 | alpine             | b8c82647e8a25 | 45.4MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/kube-proxy              | v1.29.3            | 0e9b4a0d1e86d | 86.8MB |
| docker.io/library/nginx                 | latest             | 070027a3cbe09 | 196MB  |
| registry.k8s.io/kube-apiserver          | v1.29.3            | 2581114f5709d | 124MB  |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-351339 image ls --format table --alsologtostderr:
I0328 21:24:17.009292 1179326 out.go:291] Setting OutFile to fd 1 ...
I0328 21:24:17.009440 1179326 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 21:24:17.009450 1179326 out.go:304] Setting ErrFile to fd 2...
I0328 21:24:17.009456 1179326 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 21:24:17.009708 1179326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17877-1145955/.minikube/bin
I0328 21:24:17.010340 1179326 config.go:182] Loaded profile config "functional-351339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0328 21:24:17.010465 1179326 config.go:182] Loaded profile config "functional-351339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0328 21:24:17.010949 1179326 cli_runner.go:164] Run: docker container inspect functional-351339 --format={{.State.Status}}
I0328 21:24:17.026937 1179326 ssh_runner.go:195] Run: systemctl --version
I0328 21:24:17.026992 1179326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-351339
I0328 21:24:17.046177 1179326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34273 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/functional-351339/id_rsa Username:docker}
I0328 21:24:17.153109 1179326 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
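
The four ImageList variants differ only in the --format flag; a sketch:

  out/minikube-linux-arm64 -p functional-351339 image ls --format short   # one name:tag per line
  out/minikube-linux-arm64 -p functional-351339 image ls --format table   # the ASCII table above
  out/minikube-linux-arm64 -p functional-351339 image ls --format json    # see the JSON test below
  out/minikube-linux-arm64 -p functional-351339 image ls --format yaml    # see the YAML test below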

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-351339 image ls --format json --alsologtostderr:
[{"id":"b8c82647e8a2586145e422943ae4c69c9b1600db636e1269efd256360eb396b0","repoDigests":["docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742","docker.io/library/nginx@sha256:fe6e879bfe52091d423aa46efab8899ee4da7fdc7ed7baa558dcabf3823eb0d7"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45393258"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b","registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"140414767"},{"id":"4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:107cad99dfbfbb6192d7cb685fc7702c9798cffb3fd63551fd00ae0009cf4612","registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"59175732"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794","repoDigests":["registry.k8s.io/kube-apiserver@sha256:cdfd79dbc97fb3da60fefff3622fd35d6772e4db06f523eec4630979073fc611","registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"123925451"},{"id":"0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775","repoDigests":["registry.k8s.io/kube-proxy@sha256:51e1a0d7b1254f98246e4967add615b35d8c25d2bf71e3ff64f7fe7c27fb8d79","registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"86773651"},{"id":"4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"60940831"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104","registry.k8s.io/kube-controller-manager@sha256:e89c6fb613c47831235c0758443a7a0b735ff97da7a41f9f820f3db035708c19"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"118747956"},{"id":"070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f41a10f53","repoDigests":["docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e","docker.io/library/nginx@sha256:757f33a85ed94069cf2e5c4ef4047d0e8d63d567bc7667925f886423f277fb3b"],"repoTags":["docker.io/library/nginx:latest"],"size":"196117976"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-351339"],"size":"34114467"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-351339 image ls --format json --alsologtostderr:
I0328 21:24:16.705031 1179250 out.go:291] Setting OutFile to fd 1 ...
I0328 21:24:16.706655 1179250 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 21:24:16.706702 1179250 out.go:304] Setting ErrFile to fd 2...
I0328 21:24:16.706722 1179250 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 21:24:16.707201 1179250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17877-1145955/.minikube/bin
I0328 21:24:16.708040 1179250 config.go:182] Loaded profile config "functional-351339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0328 21:24:16.708257 1179250 config.go:182] Loaded profile config "functional-351339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0328 21:24:16.708825 1179250 cli_runner.go:164] Run: docker container inspect functional-351339 --format={{.State.Status}}
I0328 21:24:16.738625 1179250 ssh_runner.go:195] Run: systemctl --version
I0328 21:24:16.738748 1179250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-351339
I0328 21:24:16.768303 1179250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34273 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/functional-351339/id_rsa Username:docker}
I0328 21:24:16.868324 1179250 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-351339 image ls --format yaml --alsologtostderr:
- id: 2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:cdfd79dbc97fb3da60fefff3622fd35d6772e4db06f523eec4630979073fc611
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "123925451"
- id: 0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775
repoDigests:
- registry.k8s.io/kube-proxy@sha256:51e1a0d7b1254f98246e4967add615b35d8c25d2bf71e3ff64f7fe7c27fb8d79
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "86773651"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f41a10f53
repoDigests:
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
- docker.io/library/nginx@sha256:757f33a85ed94069cf2e5c4ef4047d0e8d63d567bc7667925f886423f277fb3b
repoTags:
- docker.io/library/nginx:latest
size: "196117976"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-351339
size: "34114467"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
- registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "140414767"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: b8c82647e8a2586145e422943ae4c69c9b1600db636e1269efd256360eb396b0
repoDigests:
- docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742
- docker.io/library/nginx@sha256:fe6e879bfe52091d423aa46efab8899ee4da7fdc7ed7baa558dcabf3823eb0d7
repoTags:
- docker.io/library/nginx:alpine
size: "45393258"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
- registry.k8s.io/kube-controller-manager@sha256:e89c6fb613c47831235c0758443a7a0b735ff97da7a41f9f820f3db035708c19
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "118747956"
- id: 4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:107cad99dfbfbb6192d7cb685fc7702c9798cffb3fd63551fd00ae0009cf4612
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "59175732"
- id: 4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "60940831"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-351339 image ls --format yaml --alsologtostderr:
I0328 21:24:16.384798 1179190 out.go:291] Setting OutFile to fd 1 ...
I0328 21:24:16.384959 1179190 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 21:24:16.385042 1179190 out.go:304] Setting ErrFile to fd 2...
I0328 21:24:16.385062 1179190 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 21:24:16.385330 1179190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17877-1145955/.minikube/bin
I0328 21:24:16.386000 1179190 config.go:182] Loaded profile config "functional-351339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0328 21:24:16.386178 1179190 config.go:182] Loaded profile config "functional-351339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0328 21:24:16.386680 1179190 cli_runner.go:164] Run: docker container inspect functional-351339 --format={{.State.Status}}
I0328 21:24:16.406364 1179190 ssh_runner.go:195] Run: systemctl --version
I0328 21:24:16.406418 1179190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-351339
I0328 21:24:16.427635 1179190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34273 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/functional-351339/id_rsa Username:docker}
I0328 21:24:16.521526 1179190 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-351339 ssh pgrep buildkitd: exit status 1 (349.34228ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 image build -t localhost/my-image:functional-351339 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-351339 image build -t localhost/my-image:functional-351339 testdata/build --alsologtostderr: (2.204132574s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-351339 image build -t localhost/my-image:functional-351339 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 66d6d70951d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-351339
--> de9fbc9d8e8
Successfully tagged localhost/my-image:functional-351339
de9fbc9d8e82ca3ffbb0523f5a1327298c3df785fb19c96008b15dccccc2f9ed
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-351339 image build -t localhost/my-image:functional-351339 testdata/build --alsologtostderr:
I0328 21:24:16.992660 1179322 out.go:291] Setting OutFile to fd 1 ...
I0328 21:24:16.993184 1179322 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 21:24:16.993198 1179322 out.go:304] Setting ErrFile to fd 2...
I0328 21:24:16.993203 1179322 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0328 21:24:16.993561 1179322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17877-1145955/.minikube/bin
I0328 21:24:16.994313 1179322 config.go:182] Loaded profile config "functional-351339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0328 21:24:16.997566 1179322 config.go:182] Loaded profile config "functional-351339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0328 21:24:16.998489 1179322 cli_runner.go:164] Run: docker container inspect functional-351339 --format={{.State.Status}}
I0328 21:24:17.019321 1179322 ssh_runner.go:195] Run: systemctl --version
I0328 21:24:17.019382 1179322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-351339
I0328 21:24:17.048177 1179322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34273 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/functional-351339/id_rsa Username:docker}
I0328 21:24:17.148436 1179322 build_images.go:161] Building image from path: /tmp/build.3592803915.tar
I0328 21:24:17.148501 1179322 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0328 21:24:17.158867 1179322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3592803915.tar
I0328 21:24:17.163412 1179322 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3592803915.tar: stat -c "%s %y" /var/lib/minikube/build/build.3592803915.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3592803915.tar': No such file or directory
I0328 21:24:17.163439 1179322 ssh_runner.go:362] scp /tmp/build.3592803915.tar --> /var/lib/minikube/build/build.3592803915.tar (3072 bytes)
I0328 21:24:17.195670 1179322 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3592803915
I0328 21:24:17.217880 1179322 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3592803915 -xf /var/lib/minikube/build/build.3592803915.tar
I0328 21:24:17.226896 1179322 crio.go:315] Building image: /var/lib/minikube/build/build.3592803915
I0328 21:24:17.226993 1179322 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-351339 /var/lib/minikube/build/build.3592803915 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0328 21:24:19.083765 1179322 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-351339 /var/lib/minikube/build/build.3592803915 --cgroup-manager=cgroupfs: (1.856744209s)
I0328 21:24:19.083836 1179322 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3592803915
I0328 21:24:19.092652 1179322 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3592803915.tar
I0328 21:24:19.101254 1179322 build_images.go:217] Built localhost/my-image:functional-351339 from /tmp/build.3592803915.tar
I0328 21:24:19.101283 1179322 build_images.go:133] succeeded building to: functional-351339
I0328 21:24:19.101290 1179322 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.79s)
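
The build flow visible in the stderr trace is: tar the local testdata/build context, copy the tarball into the guest, unpack it under /var/lib/minikube/build, and (on the crio runtime) run the build with podman. From the host it is a single command; a sketch:

  # testdata/build holds a three-step Dockerfile (FROM busybox, RUN true, ADD content.txt).
  out/minikube-linux-arm64 -p functional-351339 image build -t localhost/my-image:functional-351339 testdata/build
  # Confirm the result is now visible to the cluster's runtime.
  out/minikube-linux-arm64 -p functional-351339 image ls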

TestFunctional/parallel/ImageCommands/Setup (1.84s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.801226753s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-351339
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.84s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 image load --daemon gcr.io/google-containers/addon-resizer:functional-351339 --alsologtostderr
2024/03/28 21:23:59 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-351339 image load --daemon gcr.io/google-containers/addon-resizer:functional-351339 --alsologtostderr: (5.909278925s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.22s)
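
Loading from the host's Docker daemon into the cluster runtime takes one tag plus one load; a sketch using the addon-resizer image prepared in the Setup test above:

  # Re-tag the pulled image with the profile-specific tag the tests use.
  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-351339
  # Copy it from the Docker daemon into the cluster's image store.
  out/minikube-linux-arm64 -p functional-351339 image load --daemon gcr.io/google-containers/addon-resizer:functional-351339
  out/minikube-linux-arm64 -p functional-351339 image ls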

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)
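
All three no_* cases above run the same command; a minimal sketch (update-context refreshes the kubeconfig entry for the profile so it points at the cluster's current endpoint):

  out/minikube-linux-arm64 -p functional-351339 update-context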

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 image load --daemon gcr.io/google-containers/addon-resizer:functional-351339 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-351339 image load --daemon gcr.io/google-containers/addon-resizer:functional-351339 --alsologtostderr: (3.034292939s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.650625943s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-351339
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 image load --daemon gcr.io/google-containers/addon-resizer:functional-351339 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-351339 image load --daemon gcr.io/google-containers/addon-resizer:functional-351339 --alsologtostderr: (3.557306451s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.48s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 image save gcr.io/google-containers/addon-resizer:functional-351339 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.88s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 image rm gcr.io/google-containers/addon-resizer:functional-351339 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-351339 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.029921677s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.28s)
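
ImageSaveToFile, ImageRemove, and ImageLoadFromFile together form a tar round trip; a sketch, with /tmp/addon-resizer-save.tar standing in for the workspace path used above:

  # Export the image from the cluster to a tarball on the host.
  out/minikube-linux-arm64 -p functional-351339 image save gcr.io/google-containers/addon-resizer:functional-351339 /tmp/addon-resizer-save.tar
  # Remove it from the cluster, then restore it from the tarball.
  out/minikube-linux-arm64 -p functional-351339 image rm gcr.io/google-containers/addon-resizer:functional-351339
  out/minikube-linux-arm64 -p functional-351339 image load /tmp/addon-resizer-save.tar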

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-351339
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-351339 image save --daemon gcr.io/google-containers/addon-resizer:functional-351339 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-351339
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.91s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-351339
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-351339
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-351339
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (158.77s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-845197 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0328 21:25:19.655545 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
E0328 21:25:47.343947 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-845197 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m37.647457646s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-linux-arm64 -p ha-845197 status -v=7 --alsologtostderr: (1.124733708s)
--- PASS: TestMultiControlPlane/serial/StartCluster (158.77s)
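
The HA start amounts to one command; a sketch, reusing the profile name from the test:

  # --ha provisions additional control-plane nodes; status then reports every node.
  out/minikube-linux-arm64 start -p ha-845197 --wait=true --memory=2200 --ha --driver=docker --container-runtime=crio
  out/minikube-linux-arm64 -p ha-845197 status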

TestMultiControlPlane/serial/DeployApp (7.2s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-845197 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-845197 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-845197 -- rollout status deployment/busybox: (4.239828772s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-845197 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-845197 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-845197 -- exec busybox-7fdf7869d9-6bnxp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-845197 -- exec busybox-7fdf7869d9-fmmvv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-845197 -- exec busybox-7fdf7869d9-sb49c -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-845197 -- exec busybox-7fdf7869d9-6bnxp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-845197 -- exec busybox-7fdf7869d9-fmmvv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-845197 -- exec busybox-7fdf7869d9-sb49c -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-845197 -- exec busybox-7fdf7869d9-6bnxp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-845197 -- exec busybox-7fdf7869d9-fmmvv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-845197 -- exec busybox-7fdf7869d9-sb49c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.20s)

TestMultiControlPlane/serial/PingHostFromPods (1.79s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-845197 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-845197 -- exec busybox-7fdf7869d9-6bnxp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-845197 -- exec busybox-7fdf7869d9-6bnxp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-845197 -- exec busybox-7fdf7869d9-fmmvv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-845197 -- exec busybox-7fdf7869d9-fmmvv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-845197 -- exec busybox-7fdf7869d9-sb49c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-845197 -- exec busybox-7fdf7869d9-sb49c -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.79s)
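
Each pod is checked the same way; a sketch against one of the busybox pods (pod names vary per run):

  # Resolve the host's address as seen from inside the pod...
  out/minikube-linux-arm64 kubectl -p ha-845197 -- exec busybox-7fdf7869d9-6bnxp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  # ...then ping the docker-driver gateway (192.168.49.1 for this cluster) directly.
  out/minikube-linux-arm64 kubectl -p ha-845197 -- exec busybox-7fdf7869d9-6bnxp -- sh -c "ping -c 1 192.168.49.1"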

TestMultiControlPlane/serial/AddWorkerNode (54.56s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-845197 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-845197 -v=7 --alsologtostderr: (53.607994697s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.56s)
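
Worker expansion is likewise a single command; a sketch:

  # Add one worker node to the running HA cluster and re-check node status.
  out/minikube-linux-arm64 node add -p ha-845197
  out/minikube-linux-arm64 -p ha-845197 status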

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-845197 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)

TestMultiControlPlane/serial/CopyFile (19.47s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 cp testdata/cp-test.txt ha-845197:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 cp ha-845197:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1868240153/001/cp-test_ha-845197.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 cp ha-845197:/home/docker/cp-test.txt ha-845197-m02:/home/docker/cp-test_ha-845197_ha-845197-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m02 "sudo cat /home/docker/cp-test_ha-845197_ha-845197-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 cp ha-845197:/home/docker/cp-test.txt ha-845197-m03:/home/docker/cp-test_ha-845197_ha-845197-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m03 "sudo cat /home/docker/cp-test_ha-845197_ha-845197-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 cp ha-845197:/home/docker/cp-test.txt ha-845197-m04:/home/docker/cp-test_ha-845197_ha-845197-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m04 "sudo cat /home/docker/cp-test_ha-845197_ha-845197-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 cp testdata/cp-test.txt ha-845197-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 cp ha-845197-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1868240153/001/cp-test_ha-845197-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 cp ha-845197-m02:/home/docker/cp-test.txt ha-845197:/home/docker/cp-test_ha-845197-m02_ha-845197.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197 "sudo cat /home/docker/cp-test_ha-845197-m02_ha-845197.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 cp ha-845197-m02:/home/docker/cp-test.txt ha-845197-m03:/home/docker/cp-test_ha-845197-m02_ha-845197-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m03 "sudo cat /home/docker/cp-test_ha-845197-m02_ha-845197-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 cp ha-845197-m02:/home/docker/cp-test.txt ha-845197-m04:/home/docker/cp-test_ha-845197-m02_ha-845197-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m04 "sudo cat /home/docker/cp-test_ha-845197-m02_ha-845197-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 cp testdata/cp-test.txt ha-845197-m03:/home/docker/cp-test.txt
E0328 21:28:15.901571 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
E0328 21:28:15.906888 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
E0328 21:28:15.917159 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
E0328 21:28:15.937433 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
E0328 21:28:15.977739 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m03 "sudo cat /home/docker/cp-test.txt"
E0328 21:28:16.058602 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
E0328 21:28:16.218937 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 cp ha-845197-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1868240153/001/cp-test_ha-845197-m03.txt
E0328 21:28:16.539089 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 cp ha-845197-m03:/home/docker/cp-test.txt ha-845197:/home/docker/cp-test_ha-845197-m03_ha-845197.txt
E0328 21:28:17.179828 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197 "sudo cat /home/docker/cp-test_ha-845197-m03_ha-845197.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 cp ha-845197-m03:/home/docker/cp-test.txt ha-845197-m02:/home/docker/cp-test_ha-845197-m03_ha-845197-m02.txt
E0328 21:28:18.460864 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m02 "sudo cat /home/docker/cp-test_ha-845197-m03_ha-845197-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 cp ha-845197-m03:/home/docker/cp-test.txt ha-845197-m04:/home/docker/cp-test_ha-845197-m03_ha-845197-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m04 "sudo cat /home/docker/cp-test_ha-845197-m03_ha-845197-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 cp testdata/cp-test.txt ha-845197-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m04 "sudo cat /home/docker/cp-test.txt"
E0328 21:28:21.021827 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 cp ha-845197-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1868240153/001/cp-test_ha-845197-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 cp ha-845197-m04:/home/docker/cp-test.txt ha-845197:/home/docker/cp-test_ha-845197-m04_ha-845197.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197 "sudo cat /home/docker/cp-test_ha-845197-m04_ha-845197.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 cp ha-845197-m04:/home/docker/cp-test.txt ha-845197-m02:/home/docker/cp-test_ha-845197-m04_ha-845197-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m02 "sudo cat /home/docker/cp-test_ha-845197-m04_ha-845197-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 cp ha-845197-m04:/home/docker/cp-test.txt ha-845197-m03:/home/docker/cp-test_ha-845197-m04_ha-845197-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m03 "sudo cat /home/docker/cp-test_ha-845197-m04_ha-845197-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.47s)
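
Each CopyFile step above is the same copy-then-verify round trip. A minimal standalone sketch of that pattern, assuming the ha-845197 profile and the test binary path from this run (the /tmp scratch file is hypothetical):

# copy a local file into a node, read it back over ssh, and compare
out/minikube-linux-arm64 -p ha-845197 cp testdata/cp-test.txt ha-845197-m02:/home/docker/cp-test.txt
out/minikube-linux-arm64 -p ha-845197 ssh -n ha-845197-m02 "sudo cat /home/docker/cp-test.txt" > /tmp/cp-roundtrip.txt
diff testdata/cp-test.txt /tmp/cp-roundtrip.txt && echo "copy verified"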

TestMultiControlPlane/serial/StopSecondaryNode (12.76s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 node stop m02 -v=7 --alsologtostderr
E0328 21:28:26.142984 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
E0328 21:28:36.383553 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-845197 node stop m02 -v=7 --alsologtostderr: (12.020239306s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-845197 status -v=7 --alsologtostderr: exit status 7 (741.614411ms)
-- stdout --
	ha-845197
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845197-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-845197-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845197-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0328 21:28:37.096005 1194114 out.go:291] Setting OutFile to fd 1 ...
	I0328 21:28:37.096233 1194114 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:28:37.096244 1194114 out.go:304] Setting ErrFile to fd 2...
	I0328 21:28:37.096250 1194114 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:28:37.096546 1194114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17877-1145955/.minikube/bin
	I0328 21:28:37.096753 1194114 out.go:298] Setting JSON to false
	I0328 21:28:37.096785 1194114 mustload.go:65] Loading cluster: ha-845197
	I0328 21:28:37.096835 1194114 notify.go:220] Checking for updates...
	I0328 21:28:37.097257 1194114 config.go:182] Loaded profile config "ha-845197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 21:28:37.097271 1194114 status.go:255] checking status of ha-845197 ...
	I0328 21:28:37.097869 1194114 cli_runner.go:164] Run: docker container inspect ha-845197 --format={{.State.Status}}
	I0328 21:28:37.116767 1194114 status.go:330] ha-845197 host status = "Running" (err=<nil>)
	I0328 21:28:37.116796 1194114 host.go:66] Checking if "ha-845197" exists ...
	I0328 21:28:37.117106 1194114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-845197
	I0328 21:28:37.134379 1194114 host.go:66] Checking if "ha-845197" exists ...
	I0328 21:28:37.134700 1194114 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 21:28:37.134753 1194114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-845197
	I0328 21:28:37.153796 1194114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/ha-845197/id_rsa Username:docker}
	I0328 21:28:37.253417 1194114 ssh_runner.go:195] Run: systemctl --version
	I0328 21:28:37.257927 1194114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 21:28:37.269553 1194114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 21:28:37.326895 1194114 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:72 SystemTime:2024-03-28 21:28:37.317186044 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 21:28:37.327500 1194114 kubeconfig.go:125] found "ha-845197" server: "https://192.168.49.254:8443"
	I0328 21:28:37.327530 1194114 api_server.go:166] Checking apiserver status ...
	I0328 21:28:37.327578 1194114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 21:28:37.339572 1194114 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1411/cgroup
	I0328 21:28:37.350541 1194114 api_server.go:182] apiserver freezer: "4:freezer:/docker/7dc69024e7b71e250800f1d9d8d1c5f0eda90ba9580a8a95d52173ed48d3d7dd/crio/crio-aa63498acfd2fbb655056bd74a75bd0c98da22d80c2c669602ebe6cd4da02220"
	I0328 21:28:37.350649 1194114 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7dc69024e7b71e250800f1d9d8d1c5f0eda90ba9580a8a95d52173ed48d3d7dd/crio/crio-aa63498acfd2fbb655056bd74a75bd0c98da22d80c2c669602ebe6cd4da02220/freezer.state
	I0328 21:28:37.359655 1194114 api_server.go:204] freezer state: "THAWED"
	I0328 21:28:37.359684 1194114 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0328 21:28:37.367637 1194114 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0328 21:28:37.367665 1194114 status.go:422] ha-845197 apiserver status = Running (err=<nil>)
	I0328 21:28:37.367681 1194114 status.go:257] ha-845197 status: &{Name:ha-845197 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 21:28:37.367699 1194114 status.go:255] checking status of ha-845197-m02 ...
	I0328 21:28:37.368026 1194114 cli_runner.go:164] Run: docker container inspect ha-845197-m02 --format={{.State.Status}}
	I0328 21:28:37.383528 1194114 status.go:330] ha-845197-m02 host status = "Stopped" (err=<nil>)
	I0328 21:28:37.383548 1194114 status.go:343] host is not running, skipping remaining checks
	I0328 21:28:37.383555 1194114 status.go:257] ha-845197-m02 status: &{Name:ha-845197-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 21:28:37.383575 1194114 status.go:255] checking status of ha-845197-m03 ...
	I0328 21:28:37.383877 1194114 cli_runner.go:164] Run: docker container inspect ha-845197-m03 --format={{.State.Status}}
	I0328 21:28:37.398278 1194114 status.go:330] ha-845197-m03 host status = "Running" (err=<nil>)
	I0328 21:28:37.398303 1194114 host.go:66] Checking if "ha-845197-m03" exists ...
	I0328 21:28:37.398599 1194114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-845197-m03
	I0328 21:28:37.414851 1194114 host.go:66] Checking if "ha-845197-m03" exists ...
	I0328 21:28:37.415189 1194114 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 21:28:37.415228 1194114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-845197-m03
	I0328 21:28:37.440188 1194114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34288 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/ha-845197-m03/id_rsa Username:docker}
	I0328 21:28:37.537660 1194114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 21:28:37.551113 1194114 kubeconfig.go:125] found "ha-845197" server: "https://192.168.49.254:8443"
	I0328 21:28:37.551144 1194114 api_server.go:166] Checking apiserver status ...
	I0328 21:28:37.551194 1194114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 21:28:37.563582 1194114 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1345/cgroup
	I0328 21:28:37.575022 1194114 api_server.go:182] apiserver freezer: "4:freezer:/docker/da067ad1e49b50f0da22437e98b53a96208240c377661694095b82a691c79718/crio/crio-fd308a22606f3772f44e1ed45a7ada3bdd37cbc0d6163f67a6cf5ab63b14fbc8"
	I0328 21:28:37.575103 1194114 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/da067ad1e49b50f0da22437e98b53a96208240c377661694095b82a691c79718/crio/crio-fd308a22606f3772f44e1ed45a7ada3bdd37cbc0d6163f67a6cf5ab63b14fbc8/freezer.state
	I0328 21:28:37.584065 1194114 api_server.go:204] freezer state: "THAWED"
	I0328 21:28:37.584179 1194114 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0328 21:28:37.592194 1194114 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0328 21:28:37.592268 1194114 status.go:422] ha-845197-m03 apiserver status = Running (err=<nil>)
	I0328 21:28:37.592286 1194114 status.go:257] ha-845197-m03 status: &{Name:ha-845197-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 21:28:37.592305 1194114 status.go:255] checking status of ha-845197-m04 ...
	I0328 21:28:37.592640 1194114 cli_runner.go:164] Run: docker container inspect ha-845197-m04 --format={{.State.Status}}
	I0328 21:28:37.610120 1194114 status.go:330] ha-845197-m04 host status = "Running" (err=<nil>)
	I0328 21:28:37.610147 1194114 host.go:66] Checking if "ha-845197-m04" exists ...
	I0328 21:28:37.610460 1194114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-845197-m04
	I0328 21:28:37.627532 1194114 host.go:66] Checking if "ha-845197-m04" exists ...
	I0328 21:28:37.627988 1194114 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 21:28:37.628188 1194114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-845197-m04
	I0328 21:28:37.646911 1194114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34293 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/ha-845197-m04/id_rsa Username:docker}
	I0328 21:28:37.741203 1194114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 21:28:37.752760 1194114 status.go:257] ha-845197-m04 status: &{Name:ha-845197-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.76s)
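
The stderr above shows how `status` decides an apiserver is healthy: find the kube-apiserver PID, resolve its freezer cgroup, require state THAWED, then probe /healthz. A hand-run sketch of the same chain, assuming cgroup v1 with the freezer controller (as on this host) and the ha-845197 profile:

# PID of the apiserver process inside the node
PID=$(out/minikube-linux-arm64 -p ha-845197 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*")
# its freezer cgroup path is the third field of the freezer line
CG=$(out/minikube-linux-arm64 -p ha-845197 ssh "sudo egrep ^[0-9]+:freezer: /proc/$PID/cgroup" | cut -d: -f3)
# THAWED means the container is not frozen; status then probes /healthz
out/minikube-linux-arm64 -p ha-845197 ssh "sudo cat /sys/fs/cgroup/freezer$CG/freezer.state"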

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

TestMultiControlPlane/serial/RestartSecondaryNode (34.36s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 node start m02 -v=7 --alsologtostderr
E0328 21:28:56.864651 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-845197 node start m02 -v=7 --alsologtostderr: (32.72563108s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-845197 status -v=7 --alsologtostderr: (1.520389779s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (34.36s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (5.48s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (5.48413649s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (5.48s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (212.09s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-845197 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-845197 -v=7 --alsologtostderr
E0328 21:29:37.824873 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-845197 -v=7 --alsologtostderr: (36.802926074s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-845197 --wait=true -v=7 --alsologtostderr
E0328 21:30:19.655616 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
E0328 21:30:59.745361 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-845197 --wait=true -v=7 --alsologtostderr: (2m55.113731362s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-845197
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (212.09s)
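
The restart check reduces to comparing the node list before and after a full stop/start cycle. A minimal sketch against the same profile:

before=$(out/minikube-linux-arm64 node list -p ha-845197)
out/minikube-linux-arm64 stop -p ha-845197
out/minikube-linux-arm64 start -p ha-845197 --wait=true
after=$(out/minikube-linux-arm64 node list -p ha-845197)
# the cluster should come back with exactly the nodes it was stopped with
[ "$before" = "$after" ] && echo "node list preserved across restart"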

TestMultiControlPlane/serial/DeleteSecondaryNode (12.96s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-845197 node delete m03 -v=7 --alsologtostderr: (11.911299833s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.96s)
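
The go-template above prints the Ready condition status for every node, one per line. Unwrapped, with a hypothetical pass/fail check bolted on:

kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}' \
  | grep -qv '^True$' && echo "some node is not Ready" || echo "all nodes Ready"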

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

TestMultiControlPlane/serial/StopCluster (35.73s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 stop -v=7 --alsologtostderr
E0328 21:33:15.901404 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-845197 stop -v=7 --alsologtostderr: (35.62094774s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-845197 status -v=7 --alsologtostderr: exit status 7 (108.803502ms)
-- stdout --
	ha-845197
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-845197-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-845197-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0328 21:33:39.418238 1208209 out.go:291] Setting OutFile to fd 1 ...
	I0328 21:33:39.418425 1208209 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:33:39.418454 1208209 out.go:304] Setting ErrFile to fd 2...
	I0328 21:33:39.418477 1208209 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:33:39.418738 1208209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17877-1145955/.minikube/bin
	I0328 21:33:39.418958 1208209 out.go:298] Setting JSON to false
	I0328 21:33:39.419018 1208209 mustload.go:65] Loading cluster: ha-845197
	I0328 21:33:39.419129 1208209 notify.go:220] Checking for updates...
	I0328 21:33:39.419443 1208209 config.go:182] Loaded profile config "ha-845197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 21:33:39.419468 1208209 status.go:255] checking status of ha-845197 ...
	I0328 21:33:39.419985 1208209 cli_runner.go:164] Run: docker container inspect ha-845197 --format={{.State.Status}}
	I0328 21:33:39.436351 1208209 status.go:330] ha-845197 host status = "Stopped" (err=<nil>)
	I0328 21:33:39.436373 1208209 status.go:343] host is not running, skipping remaining checks
	I0328 21:33:39.436381 1208209 status.go:257] ha-845197 status: &{Name:ha-845197 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 21:33:39.436412 1208209 status.go:255] checking status of ha-845197-m02 ...
	I0328 21:33:39.436717 1208209 cli_runner.go:164] Run: docker container inspect ha-845197-m02 --format={{.State.Status}}
	I0328 21:33:39.452511 1208209 status.go:330] ha-845197-m02 host status = "Stopped" (err=<nil>)
	I0328 21:33:39.452535 1208209 status.go:343] host is not running, skipping remaining checks
	I0328 21:33:39.452543 1208209 status.go:257] ha-845197-m02 status: &{Name:ha-845197-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 21:33:39.452563 1208209 status.go:255] checking status of ha-845197-m04 ...
	I0328 21:33:39.452870 1208209 cli_runner.go:164] Run: docker container inspect ha-845197-m04 --format={{.State.Status}}
	I0328 21:33:39.468069 1208209 status.go:330] ha-845197-m04 host status = "Stopped" (err=<nil>)
	I0328 21:33:39.468130 1208209 status.go:343] host is not running, skipping remaining checks
	I0328 21:33:39.468139 1208209 status.go:257] ha-845197-m04 status: &{Name:ha-845197-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.73s)
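
Note that `status` exits non-zero by design when hosts are down (exit status 7 in both stopped runs above), so callers must treat the exit code as state rather than failure. A hedged sketch of that handling:

if out/minikube-linux-arm64 -p ha-845197 status; then
  echo "all nodes up"
else
  rc=$?
  # in the runs above, rc 7 accompanied stopped hosts; other codes are worth logging
  echo "cluster degraded or stopped (status exit code $rc)"
fi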

TestMultiControlPlane/serial/RestartCluster (120.11s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-845197 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0328 21:33:43.585733 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
E0328 21:35:19.656349 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-845197 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m59.12133495s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (120.11s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

TestMultiControlPlane/serial/AddSecondaryNode (62.34s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-845197 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-845197 --control-plane -v=7 --alsologtostderr: (1m1.344825754s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-845197 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (62.34s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.78s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0328 21:36:42.704188 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.78s)

TestJSONOutput/start/Command (79.78s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-629405 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-629405 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m19.772978499s)
--- PASS: TestJSONOutput/start/Command (79.78s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-629405 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-629405 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.77s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-629405 --output=json --user=testUser
E0328 21:38:15.901031 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-629405 --output=json --user=testUser: (5.770341078s)
--- PASS: TestJSONOutput/stop/Command (5.77s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-644400 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-644400 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (91.023031ms)
-- stdout --
	{"specversion":"1.0","id":"5952cc17-5303-4ee1-bea5-e96419550883","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-644400] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b3a693b0-374c-4120-b9fd-930e10775c31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17877"}}
	{"specversion":"1.0","id":"bfc6b126-7bf2-4ac6-815e-118246e1b84d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e65789e3-ab2f-471e-a6d7-5577a21d3f67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17877-1145955/kubeconfig"}}
	{"specversion":"1.0","id":"cb75ffa9-c993-474f-8389-1abe3dc8ee7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17877-1145955/.minikube"}}
	{"specversion":"1.0","id":"86487c39-c428-42b1-afd7-65931e72767e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"288adf33-166d-4957-9053-13c2b74a8314","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0259d058-5c68-4475-aaab-e5ffce79d766","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-644400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-644400
--- PASS: TestErrorJSONOutput (0.23s)
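
Every stdout line above is a CloudEvents-style JSON object whose `type` field distinguishes steps, info messages, and errors. A minimal consumer sketch, assuming jq is available on the host:

out/minikube-linux-arm64 start -p json-output-error-644400 --output=json --driver=fail 2>/dev/null \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.exitcode): \(.data.message)"'
# with the unsupported driver above this yields:
# 56: The driver 'fail' is not supported on linux/arm64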

TestKicCustomNetwork/create_custom_network (44.04s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-347929 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-347929 --network=: (41.899646282s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-347929" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-347929
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-347929: (2.123867304s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.04s)

TestKicCustomNetwork/use_default_bridge_network (34.75s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-807413 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-807413 --network=bridge: (32.672671888s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-807413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-807413
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-807413: (2.055317276s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.75s)

TestKicExistingNetwork (33.14s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-516783 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-516783 --network=existing-network: (31.043445532s)
helpers_test.go:175: Cleaning up "existing-network-516783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-516783
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-516783: (1.964980986s)
--- PASS: TestKicExistingNetwork (33.14s)

TestKicCustomSubnet (32.96s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-184575 --subnet=192.168.60.0/24
E0328 21:40:19.657028 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-184575 --subnet=192.168.60.0/24: (30.807085478s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-184575 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-184575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-184575
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-184575: (2.13156324s)
--- PASS: TestKicCustomSubnet (32.96s)
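
The subnet assertion is a docker-side inspect of the network minikube names after the profile. A standalone sketch with the same format string (profile name hypothetical):

out/minikube-linux-arm64 start -p custom-subnet-demo --subnet=192.168.60.0/24
docker network inspect custom-subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'
# expect: 192.168.60.0/24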

TestKicStaticIP (34.01s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-790865 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-790865 --static-ip=192.168.200.200: (31.760562499s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-790865 ip
helpers_test.go:175: Cleaning up "static-ip-790865" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-790865
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-790865: (2.09324618s)
--- PASS: TestKicStaticIP (34.01s)
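
The static-IP check likewise compares the requested address against what `minikube ip` reports. A minimal sketch (profile name hypothetical):

out/minikube-linux-arm64 start -p static-ip-demo --static-ip=192.168.200.200
[ "$(out/minikube-linux-arm64 -p static-ip-demo ip)" = "192.168.200.200" ] && echo "static IP honored"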

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (71.58s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-400173 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-400173 --driver=docker  --container-runtime=crio: (35.059853324s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-402799 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-402799 --driver=docker  --container-runtime=crio: (31.343968522s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-400173
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-402799
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-402799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-402799
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-402799: (1.952574406s)
helpers_test.go:175: Cleaning up "first-400173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-400173
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-400173: (1.948533645s)
--- PASS: TestMinikubeProfile (71.58s)

TestMountStart/serial/StartWithMountFirst (7.09s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-449887 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-449887 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.091507432s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.09s)
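
The mount-start profiles pin every 9p mount parameter explicitly, and the Verify* steps that follow only `ls` the default /minikube-host mount point. A condensed start-and-verify sketch (profile name hypothetical; flags as used above):

out/minikube-linux-arm64 start -p mount-demo --memory=2048 --mount \
  --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
  --no-kubernetes --driver=docker --container-runtime=crio
# the host directory should be listable inside the guest at /minikube-host
out/minikube-linux-arm64 -p mount-demo ssh -- ls /minikube-host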

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-449887 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.58s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-462720 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-462720 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.581669736s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.58s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-462720 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.59s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-449887 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-449887 --alsologtostderr -v=5: (1.592903225s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-462720 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-462720
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-462720: (1.213766793s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.81s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-462720
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-462720: (6.808359079s)
--- PASS: TestMountStart/serial/RestartStopped (7.81s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-462720 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (119.89s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-425497 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0328 21:43:15.901262 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
E0328 21:44:38.946812 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-425497 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m59.335898591s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (119.89s)

TestMultiNode/serial/DeployApp2Nodes (5.8s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425497 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425497 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-425497 -- rollout status deployment/busybox: (3.818872905s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425497 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425497 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425497 -- exec busybox-7fdf7869d9-6gmmg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425497 -- exec busybox-7fdf7869d9-r2jzn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425497 -- exec busybox-7fdf7869d9-6gmmg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425497 -- exec busybox-7fdf7869d9-r2jzn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425497 -- exec busybox-7fdf7869d9-6gmmg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425497 -- exec busybox-7fdf7869d9-r2jzn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.80s)
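
DeployApp2Nodes applies a two-replica busybox deployment and resolves an external name, a short service name, and a fully qualified service name from each pod. A sketch of the same probe (pod names are generated; <busybox-pod> is a placeholder for one of them):

	minikube kubectl -p multinode-425497 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
	minikube kubectl -p multinode-425497 -- rollout status deployment/busybox
	# substitute a real pod name from `get pods` for <busybox-pod>
	minikube kubectl -p multinode-425497 -- exec <busybox-pod> -- nslookup kubernetes.io
	minikube kubectl -p multinode-425497 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local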

TestMultiNode/serial/PingHostFrom2Pods (1.04s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425497 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425497 -- exec busybox-7fdf7869d9-6gmmg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425497 -- exec busybox-7fdf7869d9-6gmmg -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425497 -- exec busybox-7fdf7869d9-r2jzn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-425497 -- exec busybox-7fdf7869d9-r2jzn -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.04s)

TestMultiNode/serial/AddNode (49.36s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-425497 -v 3 --alsologtostderr
E0328 21:45:19.656304 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-425497 -v 3 --alsologtostderr: (48.711771073s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.36s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-425497 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.33s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

TestMultiNode/serial/CopyFile (10.35s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 cp testdata/cp-test.txt multinode-425497:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 ssh -n multinode-425497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 cp multinode-425497:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2413887451/001/cp-test_multinode-425497.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 ssh -n multinode-425497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 cp multinode-425497:/home/docker/cp-test.txt multinode-425497-m02:/home/docker/cp-test_multinode-425497_multinode-425497-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 ssh -n multinode-425497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 ssh -n multinode-425497-m02 "sudo cat /home/docker/cp-test_multinode-425497_multinode-425497-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 cp multinode-425497:/home/docker/cp-test.txt multinode-425497-m03:/home/docker/cp-test_multinode-425497_multinode-425497-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 ssh -n multinode-425497 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 ssh -n multinode-425497-m03 "sudo cat /home/docker/cp-test_multinode-425497_multinode-425497-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 cp testdata/cp-test.txt multinode-425497-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 ssh -n multinode-425497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 cp multinode-425497-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2413887451/001/cp-test_multinode-425497-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 ssh -n multinode-425497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 cp multinode-425497-m02:/home/docker/cp-test.txt multinode-425497:/home/docker/cp-test_multinode-425497-m02_multinode-425497.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 ssh -n multinode-425497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 ssh -n multinode-425497 "sudo cat /home/docker/cp-test_multinode-425497-m02_multinode-425497.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 cp multinode-425497-m02:/home/docker/cp-test.txt multinode-425497-m03:/home/docker/cp-test_multinode-425497-m02_multinode-425497-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 ssh -n multinode-425497-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 ssh -n multinode-425497-m03 "sudo cat /home/docker/cp-test_multinode-425497-m02_multinode-425497-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 cp testdata/cp-test.txt multinode-425497-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 ssh -n multinode-425497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 cp multinode-425497-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2413887451/001/cp-test_multinode-425497-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 ssh -n multinode-425497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 cp multinode-425497-m03:/home/docker/cp-test.txt multinode-425497:/home/docker/cp-test_multinode-425497-m03_multinode-425497.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 ssh -n multinode-425497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 ssh -n multinode-425497 "sudo cat /home/docker/cp-test_multinode-425497-m03_multinode-425497.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 cp multinode-425497-m03:/home/docker/cp-test.txt multinode-425497-m02:/home/docker/cp-test_multinode-425497-m03_multinode-425497-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 ssh -n multinode-425497-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 ssh -n multinode-425497-m02 "sudo cat /home/docker/cp-test_multinode-425497-m03_multinode-425497-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.35s)
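
CopyFile cycles `minikube cp` through every direction. The core pattern, sketched (host→node, node→host, node→node, each verified over ssh; the host destination path here is illustrative, the suite uses a temp dir):

	# host file onto the control plane, then read it back
	minikube -p multinode-425497 cp testdata/cp-test.txt multinode-425497:/home/docker/cp-test.txt
	minikube -p multinode-425497 ssh -n multinode-425497 "sudo cat /home/docker/cp-test.txt"
	# node to host
	minikube -p multinode-425497 cp multinode-425497:/home/docker/cp-test.txt /tmp/cp-test_multinode-425497.txt
	# node to node
	minikube -p multinode-425497 cp multinode-425497:/home/docker/cp-test.txt multinode-425497-m02:/home/docker/cp-test.txt
	minikube -p multinode-425497 ssh -n multinode-425497-m02 "sudo cat /home/docker/cp-test.txt"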

TestMultiNode/serial/StopNode (2.3s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-425497 node stop m03: (1.218006504s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-425497 status: exit status 7 (500.857891ms)

                                                
                                                
-- stdout --
	multinode-425497
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-425497-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-425497-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-425497 status --alsologtostderr: exit status 7 (580.557484ms)

                                                
                                                
-- stdout --
	multinode-425497
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-425497-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-425497-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 21:46:09.968686 1260542 out.go:291] Setting OutFile to fd 1 ...
	I0328 21:46:09.968880 1260542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:46:09.968893 1260542 out.go:304] Setting ErrFile to fd 2...
	I0328 21:46:09.968898 1260542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:46:09.969178 1260542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17877-1145955/.minikube/bin
	I0328 21:46:09.969431 1260542 out.go:298] Setting JSON to false
	I0328 21:46:09.969494 1260542 mustload.go:65] Loading cluster: multinode-425497
	I0328 21:46:09.969582 1260542 notify.go:220] Checking for updates...
	I0328 21:46:09.970826 1260542 config.go:182] Loaded profile config "multinode-425497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 21:46:09.970853 1260542 status.go:255] checking status of multinode-425497 ...
	I0328 21:46:09.971488 1260542 cli_runner.go:164] Run: docker container inspect multinode-425497 --format={{.State.Status}}
	I0328 21:46:09.996066 1260542 status.go:330] multinode-425497 host status = "Running" (err=<nil>)
	I0328 21:46:09.996117 1260542 host.go:66] Checking if "multinode-425497" exists ...
	I0328 21:46:09.996413 1260542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-425497
	I0328 21:46:10.067761 1260542 host.go:66] Checking if "multinode-425497" exists ...
	I0328 21:46:10.068072 1260542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 21:46:10.068212 1260542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-425497
	I0328 21:46:10.085697 1260542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34398 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/multinode-425497/id_rsa Username:docker}
	I0328 21:46:10.186654 1260542 ssh_runner.go:195] Run: systemctl --version
	I0328 21:46:10.191439 1260542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 21:46:10.204217 1260542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 21:46:10.267892 1260542 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-03-28 21:46:10.257498711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 21:46:10.268678 1260542 kubeconfig.go:125] found "multinode-425497" server: "https://192.168.67.2:8443"
	I0328 21:46:10.268720 1260542 api_server.go:166] Checking apiserver status ...
	I0328 21:46:10.268775 1260542 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 21:46:10.280407 1260542 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup
	I0328 21:46:10.290086 1260542 api_server.go:182] apiserver freezer: "4:freezer:/docker/7caeff31a4033c4dbb864d6c9a824f348bcdc6dbf99283244818d70d1df6b582/crio/crio-8da18eb9f9fb4a7c63204bb8ec052c42a14228c71f5010d0c84a05b12f0015e8"
	I0328 21:46:10.290216 1260542 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7caeff31a4033c4dbb864d6c9a824f348bcdc6dbf99283244818d70d1df6b582/crio/crio-8da18eb9f9fb4a7c63204bb8ec052c42a14228c71f5010d0c84a05b12f0015e8/freezer.state
	I0328 21:46:10.299145 1260542 api_server.go:204] freezer state: "THAWED"
	I0328 21:46:10.299181 1260542 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0328 21:46:10.307002 1260542 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0328 21:46:10.307030 1260542 status.go:422] multinode-425497 apiserver status = Running (err=<nil>)
	I0328 21:46:10.307041 1260542 status.go:257] multinode-425497 status: &{Name:multinode-425497 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 21:46:10.307071 1260542 status.go:255] checking status of multinode-425497-m02 ...
	I0328 21:46:10.307380 1260542 cli_runner.go:164] Run: docker container inspect multinode-425497-m02 --format={{.State.Status}}
	I0328 21:46:10.323262 1260542 status.go:330] multinode-425497-m02 host status = "Running" (err=<nil>)
	I0328 21:46:10.323294 1260542 host.go:66] Checking if "multinode-425497-m02" exists ...
	I0328 21:46:10.323606 1260542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-425497-m02
	I0328 21:46:10.338625 1260542 host.go:66] Checking if "multinode-425497-m02" exists ...
	I0328 21:46:10.338983 1260542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 21:46:10.339077 1260542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-425497-m02
	I0328 21:46:10.354598 1260542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34403 SSHKeyPath:/home/jenkins/minikube-integration/17877-1145955/.minikube/machines/multinode-425497-m02/id_rsa Username:docker}
	I0328 21:46:10.449545 1260542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 21:46:10.461765 1260542 status.go:257] multinode-425497-m02 status: &{Name:multinode-425497-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0328 21:46:10.461801 1260542 status.go:255] checking status of multinode-425497-m03 ...
	I0328 21:46:10.462116 1260542 cli_runner.go:164] Run: docker container inspect multinode-425497-m03 --format={{.State.Status}}
	I0328 21:46:10.482560 1260542 status.go:330] multinode-425497-m03 host status = "Stopped" (err=<nil>)
	I0328 21:46:10.482587 1260542 status.go:343] host is not running, skipping remaining checks
	I0328 21:46:10.482595 1260542 status.go:257] multinode-425497-m03 status: &{Name:multinode-425497-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)
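
Worth noting: with one worker stopped, `minikube status` exits 7 rather than 0, and that exit code is exactly what the test asserts. Sketch:

	minikube -p multinode-425497 node stop m03
	minikube -p multinode-425497 status   # prints per-node state; exits 7 while any node is stopped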

TestMultiNode/serial/StartAfterStop (9.85s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-425497 node start m03 -v=7 --alsologtostderr: (9.087935614s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.85s)

TestMultiNode/serial/RestartKeepsNodes (103.76s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-425497
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-425497
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-425497: (24.914056109s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-425497 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-425497 --wait=true -v=8 --alsologtostderr: (1m18.684001879s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-425497
--- PASS: TestMultiNode/serial/RestartKeepsNodes (103.76s)

TestMultiNode/serial/DeleteNode (5.57s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-425497 node delete m03: (4.888646891s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.57s)

TestMultiNode/serial/StopMultiNode (23.81s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 stop
E0328 21:48:15.901023 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-425497 stop: (23.625384228s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-425497 status: exit status 7 (90.257735ms)

                                                
                                                
-- stdout --
	multinode-425497
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-425497-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-425497 status --alsologtostderr: exit status 7 (94.242431ms)

                                                
                                                
-- stdout --
	multinode-425497
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-425497-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0328 21:48:33.443740 1267913 out.go:291] Setting OutFile to fd 1 ...
	I0328 21:48:33.443853 1267913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:48:33.443863 1267913 out.go:304] Setting ErrFile to fd 2...
	I0328 21:48:33.443869 1267913 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 21:48:33.444129 1267913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17877-1145955/.minikube/bin
	I0328 21:48:33.444332 1267913 out.go:298] Setting JSON to false
	I0328 21:48:33.444364 1267913 mustload.go:65] Loading cluster: multinode-425497
	I0328 21:48:33.444466 1267913 notify.go:220] Checking for updates...
	I0328 21:48:33.444768 1267913 config.go:182] Loaded profile config "multinode-425497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0328 21:48:33.444780 1267913 status.go:255] checking status of multinode-425497 ...
	I0328 21:48:33.445261 1267913 cli_runner.go:164] Run: docker container inspect multinode-425497 --format={{.State.Status}}
	I0328 21:48:33.463120 1267913 status.go:330] multinode-425497 host status = "Stopped" (err=<nil>)
	I0328 21:48:33.463141 1267913 status.go:343] host is not running, skipping remaining checks
	I0328 21:48:33.463148 1267913 status.go:257] multinode-425497 status: &{Name:multinode-425497 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 21:48:33.463180 1267913 status.go:255] checking status of multinode-425497-m02 ...
	I0328 21:48:33.463478 1267913 cli_runner.go:164] Run: docker container inspect multinode-425497-m02 --format={{.State.Status}}
	I0328 21:48:33.478681 1267913 status.go:330] multinode-425497-m02 host status = "Stopped" (err=<nil>)
	I0328 21:48:33.478723 1267913 status.go:343] host is not running, skipping remaining checks
	I0328 21:48:33.478731 1267913 status.go:257] multinode-425497-m02 status: &{Name:multinode-425497-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.81s)

TestMultiNode/serial/RestartMultiNode (64.71s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-425497 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-425497 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m4.022013544s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-425497 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (64.71s)

TestMultiNode/serial/ValidateNameConflict (35.46s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-425497
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-425497-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-425497-m02 --driver=docker  --container-runtime=crio: exit status 14 (94.455764ms)

                                                
                                                
-- stdout --
	* [multinode-425497-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17877-1145955/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17877-1145955/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-425497-m02' is duplicated with machine name 'multinode-425497-m02' in profile 'multinode-425497'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-425497-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-425497-m03 --driver=docker  --container-runtime=crio: (33.060246793s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-425497
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-425497: exit status 80 (308.532191ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-425497 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-425497-m03 already exists in multinode-425497-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-425497-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-425497-m03: (1.933652039s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.46s)
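
ValidateNameConflict pins down two naming rules: a new profile cannot reuse an existing machine name (exit 14), and `node add` refuses to create a node whose generated name already exists as a profile (exit 80). Sketched from the runs above:

	minikube start -p multinode-425497-m02 --driver=docker --container-runtime=crio   # exit 14: clashes with machine m02 of multinode-425497
	minikube start -p multinode-425497-m03 --driver=docker --container-runtime=crio   # succeeds: no clash yet
	minikube node add -p multinode-425497   # exit 80: the next node would be m03, now taken by the new profile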

TestPreload (121.3s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-263515 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0328 21:50:19.656007 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-263515 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m26.463191795s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-263515 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-263515 image pull gcr.io/k8s-minikube/busybox: (1.795917829s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-263515
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-263515: (5.809326814s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-263515 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-263515 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (24.618238988s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-263515 image list
helpers_test.go:175: Cleaning up "test-preload-263515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-263515
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-263515: (2.360332026s)
--- PASS: TestPreload (121.30s)

TestScheduledStopUnix (107.63s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-880708 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-880708 --memory=2048 --driver=docker  --container-runtime=crio: (30.685755135s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-880708 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-880708 -n scheduled-stop-880708
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-880708 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-880708 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-880708 -n scheduled-stop-880708
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-880708
E0328 21:53:15.901952 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-880708 --schedule 15s
E0328 21:53:22.705252 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-880708
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-880708: exit status 7 (81.417817ms)

                                                
                                                
-- stdout --
	scheduled-stop-880708
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-880708 -n scheduled-stop-880708
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-880708 -n scheduled-stop-880708: exit status 7 (77.595984ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-880708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-880708
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-880708: (5.249396323s)
--- PASS: TestScheduledStopUnix (107.63s)
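
The scheduled-stop flow is driven by three flag combinations on `minikube stop`; a sketch of the sequence the test walks (binary on PATH assumed):

	minikube stop -p scheduled-stop-880708 --schedule 5m        # arm a stop five minutes out
	minikube stop -p scheduled-stop-880708 --cancel-scheduled   # disarm it
	minikube stop -p scheduled-stop-880708 --schedule 15s       # re-arm; shortly after, status reports Stopped
	minikube status -p scheduled-stop-880708 --format={{.Host}} # exits 7 once the host is down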

TestInsufficientStorage (10.65s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-868177 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-868177 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.176193536s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9f60f03e-fba6-40f5-95f7-603dd0cc10a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-868177] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d4692266-177d-4eaa-afb7-2ce51b7834d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17877"}}
	{"specversion":"1.0","id":"020dd3db-695c-4c21-9b7f-059d401bf973","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"77b122c5-230c-41f1-b480-dc5ecb33fa6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17877-1145955/kubeconfig"}}
	{"specversion":"1.0","id":"a78cbfc1-6c45-40fa-af50-03203461b081","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17877-1145955/.minikube"}}
	{"specversion":"1.0","id":"174b47f5-9322-488f-99a4-ea85555608d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a3def7ca-db0a-4c97-88b1-106dd00b5ca9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2d64d990-dbab-43ba-99b6-cd5ab57d7f51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"7e13f97d-2ff4-48bc-a255-588cdd341bd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d074a750-2bce-44bd-858e-f4ccff9e81c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"337a6eec-fd90-4453-b187-f02e855db329","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"40fce236-4c81-488b-ac8f-84ad5bd87f81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-868177\" primary control-plane node in \"insufficient-storage-868177\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0aa27b7a-b34e-4e3d-b66d-91f6df9016ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-1711559786-18485 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f7cec24-f35f-45e9-bddb-4befd96fe408","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"74452abd-b774-4ed9-8545-b7c5895af7a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-868177 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-868177 --output=json --layout=cluster: exit status 7 (293.545901ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-868177","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-868177","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 21:54:15.099252 1284320 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-868177" does not appear in /home/jenkins/minikube-integration/17877-1145955/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-868177 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-868177 --output=json --layout=cluster: exit status 7 (297.934672ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-868177","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-868177","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0328 21:54:15.402016 1284375 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-868177" does not appear in /home/jenkins/minikube-integration/17877-1145955/kubeconfig
	E0328 21:54:15.412197 1284375 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/insufficient-storage-868177/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-868177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-868177
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-868177: (1.881492609s)
--- PASS: TestInsufficientStorage (10.65s)
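
The storage exhaustion is simulated, not real: the two MINIKUBE_TEST_* variables visible in the JSON events above appear to override the capacity and free-space figures the check sees. Sketch (variable semantics inferred from this run):

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  minikube start -p insufficient-storage-868177 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio
	# start exits 26 (RSRC_DOCKER_STORAGE); status then reports StatusCode 507 InsufficientStorage
	minikube status -p insufficient-storage-868177 --output=json --layout=cluster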

TestRunningBinaryUpgrade (78.94s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.754278963 start -p running-upgrade-630683 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0328 21:58:15.901684 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.754278963 start -p running-upgrade-630683 --memory=2200 --vm-driver=docker  --container-runtime=crio: (33.875489958s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-630683 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-630683 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.151620811s)
helpers_test.go:175: Cleaning up "running-upgrade-630683" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-630683
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-630683: (2.734592504s)
--- PASS: TestRunningBinaryUpgrade (78.94s)
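
RunningBinaryUpgrade starts a cluster with an old release binary, then re-runs `start` with the binary under test against the live cluster. Sketch (the v1.26.0 binary is fetched by the suite to a random temp path, shown here as the placeholder <minikube-v1.26.0>):

	<minikube-v1.26.0> start -p running-upgrade-630683 --memory=2200 --vm-driver=docker --container-runtime=crio
	out/minikube-linux-arm64 start -p running-upgrade-630683 --memory=2200 --driver=docker --container-runtime=crio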

TestKubernetesUpgrade (396.53s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-037126 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-037126 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m8.391633771s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-037126
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-037126: (1.337882814s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-037126 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-037126 status --format={{.Host}}: exit status 7 (87.564961ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-037126 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-037126 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m45.761308157s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-037126 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-037126 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-037126 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (120.365091ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-037126] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17877-1145955/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17877-1145955/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-037126
	    minikube start -p kubernetes-upgrade-037126 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0371262 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-037126 --kubernetes-version=v1.30.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-037126 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-037126 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.44455822s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-037126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-037126
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-037126: (2.276862424s)
--- PASS: TestKubernetesUpgrade (396.53s)
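
The version path asserted above: start at v1.20.0, stop, upgrade to v1.30.0-beta.0, confirm that a direct downgrade is refused (exit 106), then restart at the new version. Condensed sketch:

	minikube start -p kubernetes-upgrade-037126 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
	minikube stop -p kubernetes-upgrade-037126
	minikube start -p kubernetes-upgrade-037126 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --driver=docker --container-runtime=crio
	# exit 106 (K8S_DOWNGRADE_UNSUPPORTED); delete and recreate instead, as the suggestion text says
	minikube start -p kubernetes-upgrade-037126 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio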

TestMissingContainerUpgrade (152.52s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3042054400 start -p missing-upgrade-112304 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3042054400 start -p missing-upgrade-112304 --memory=2200 --driver=docker  --container-runtime=crio: (1m13.969213677s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-112304
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-112304: (13.483756646s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-112304
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-112304 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-112304 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m1.577452705s)
helpers_test.go:175: Cleaning up "missing-upgrade-112304" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-112304
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-112304: (2.207834507s)
--- PASS: TestMissingContainerUpgrade (152.52s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-519989 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-519989 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (87.202503ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-519989] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17877-1145955/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17877-1145955/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
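
As the error text says, --kubernetes-version and --no-kubernetes are mutually exclusive, and a globally configured version trips the same check. Sketch of the failing call and the suggested fix:

	minikube start -p NoKubernetes-519989 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio   # exit 14: MK_USAGE
	minikube config unset kubernetes-version   # clear any global default
	minikube start -p NoKubernetes-519989 --no-kubernetes --driver=docker --container-runtime=crio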

TestNoKubernetes/serial/StartWithK8s (40.33s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-519989 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-519989 --driver=docker  --container-runtime=crio: (39.88315296s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-519989 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.33s)

TestNoKubernetes/serial/StartWithStopK8s (13.72s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-519989 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-519989 --no-kubernetes --driver=docker  --container-runtime=crio: (11.372043353s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-519989 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-519989 status -o json: exit status 2 (389.310341ms)

-- stdout --
	{"Name":"NoKubernetes-519989","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-519989
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-519989: (1.955762117s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (13.72s)

TestNoKubernetes/serial/Start (10.69s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-519989 --no-kubernetes --driver=docker  --container-runtime=crio
E0328 21:55:19.656959 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-519989 --no-kubernetes --driver=docker  --container-runtime=crio: (10.692206981s)
--- PASS: TestNoKubernetes/serial/Start (10.69s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.41s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-519989 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-519989 "sudo systemctl is-active --quiet service kubelet": exit status 1 (409.151968ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.41s)
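Here too the non-zero exit is the success condition: systemd's is-active exits with status 3 for an inactive unit (matching the stderr above), and minikube ssh surfaces that as its own failure. A rough Go sketch of the check, reusing the command from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "minikube ssh" relays the remote command's failure as its own
	// non-zero exit; systemctl is-active exits 3 for an inactive unit.
	cmd := exec.Command("out/minikube-linux-arm64", "ssh",
		"-p", "NoKubernetes-519989",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet inactive, as expected:", err)
		return
	}
	fmt.Println("kubelet unexpectedly active in a Kubernetes-free profile")
}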

TestNoKubernetes/serial/ProfileList (5.86s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (5.361308984s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (5.86s)

TestNoKubernetes/serial/Stop (1.28s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-519989
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-519989: (1.276520325s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

TestNoKubernetes/serial/StartNoArgs (6.96s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-519989 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-519989 --driver=docker  --container-runtime=crio: (6.959400225s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.96s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-519989 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-519989 "sudo systemctl is-active --quiet service kubelet": exit status 1 (273.786349ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestStoppedBinaryUpgrade/Setup (1.11s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.11s)

TestStoppedBinaryUpgrade/Upgrade (76.46s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.694117586 start -p stopped-upgrade-929212 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.694117586 start -p stopped-upgrade-929212 --memory=2200 --vm-driver=docker  --container-runtime=crio: (37.742585185s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.694117586 -p stopped-upgrade-929212 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.694117586 -p stopped-upgrade-929212 stop: (3.86242591s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-929212 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-929212 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.854374734s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (76.46s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-929212
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-929212: (1.131665988s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

TestPause/serial/Start (78.41s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-105020 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0328 22:00:19.656314 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-105020 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m18.405192967s)
--- PASS: TestPause/serial/Start (78.41s)

TestPause/serial/SecondStartNoReconfiguration (25.02s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-105020 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-105020 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.997979324s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (25.02s)

TestPause/serial/Pause (1.06s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-105020 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-105020 --alsologtostderr -v=5: (1.054963511s)
--- PASS: TestPause/serial/Pause (1.06s)

TestPause/serial/VerifyStatus (0.37s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-105020 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-105020 --output=json --layout=cluster: exit status 2 (372.250813ms)

-- stdout --
	{"Name":"pause-105020","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-105020","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)
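Exit status 2 with valid JSON is the expected shape here: the cluster reports StatusCode 418 ("Paused") while the kubelet shows 405 ("Stopped"). A small Go sketch of decoding the fields used above; the struct is an ad-hoc assumption for illustration, not minikube's own type:

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus models just the fields referenced above; the real output
// carries more (Step, BinaryVersion, Components, ...).
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	raw := `{"Name":"pause-105020","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-105020","StatusName":"OK"}]}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
}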

TestPause/serial/Unpause (0.87s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-105020 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.87s)

TestPause/serial/PauseAgain (1.26s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-105020 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-105020 --alsologtostderr -v=5: (1.259637557s)
--- PASS: TestPause/serial/PauseAgain (1.26s)

TestPause/serial/DeletePaused (2.84s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-105020 --alsologtostderr -v=5
E0328 22:01:18.947610 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-105020 --alsologtostderr -v=5: (2.84319247s)
--- PASS: TestPause/serial/DeletePaused (2.84s)

TestPause/serial/VerifyDeletedResources (0.51s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-105020
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-105020: exit status 1 (19.908953ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-105020: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.51s)
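Cleanup verification leans on docker itself: once the profile is deleted, docker volume inspect exits non-zero and reports "no such volume". A brief Go sketch of that assertion:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// After "minikube delete", the profile's volume must be gone.
	out, err := exec.Command("docker", "volume", "inspect", "pause-105020").CombinedOutput()
	if err != nil && strings.Contains(string(out), "no such volume") {
		fmt.Println("volume gone, as expected after delete")
		return
	}
	fmt.Printf("volume may still exist: err=%v out=%s\n", err, out)
}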

TestNetworkPlugins/group/false (4.71s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-428181 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-428181 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (278.097375ms)

-- stdout --
	* [false-428181] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17877
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17877-1145955/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17877-1145955/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0328 22:02:05.998369 1323307 out.go:291] Setting OutFile to fd 1 ...
	I0328 22:02:05.998876 1323307 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 22:02:05.998937 1323307 out.go:304] Setting ErrFile to fd 2...
	I0328 22:02:05.998957 1323307 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 22:02:05.999259 1323307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17877-1145955/.minikube/bin
	I0328 22:02:05.999879 1323307 out.go:298] Setting JSON to false
	I0328 22:02:06.011872 1323307 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":20676,"bootTime":1711642650,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0328 22:02:06.012046 1323307 start.go:139] virtualization:  
	I0328 22:02:06.015948 1323307 out.go:177] * [false-428181] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0328 22:02:06.018112 1323307 out.go:177]   - MINIKUBE_LOCATION=17877
	I0328 22:02:06.018207 1323307 notify.go:220] Checking for updates...
	I0328 22:02:06.020842 1323307 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 22:02:06.023236 1323307 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17877-1145955/kubeconfig
	I0328 22:02:06.027616 1323307 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17877-1145955/.minikube
	I0328 22:02:06.029686 1323307 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0328 22:02:06.031665 1323307 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 22:02:06.034356 1323307 config.go:182] Loaded profile config "kubernetes-upgrade-037126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0-beta.0
	I0328 22:02:06.034567 1323307 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 22:02:06.064184 1323307 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0328 22:02:06.064327 1323307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0328 22:02:06.162139 1323307 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2024-03-28 22:02:06.1500844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors
:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0328 22:02:06.162253 1323307 docker.go:295] overlay module found
	I0328 22:02:06.164487 1323307 out.go:177] * Using the docker driver based on user configuration
	I0328 22:02:06.166530 1323307 start.go:297] selected driver: docker
	I0328 22:02:06.166544 1323307 start.go:901] validating driver "docker" against <nil>
	I0328 22:02:06.166556 1323307 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 22:02:06.169076 1323307 out.go:177] 
	W0328 22:02:06.170991 1323307 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0328 22:02:06.172875 1323307 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-428181 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-428181

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-428181

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-428181

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-428181

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-428181

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-428181

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-428181

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-428181

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-428181

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-428181

>>> host: /etc/nsswitch.conf:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: /etc/hosts:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: /etc/resolv.conf:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-428181

>>> host: crictl pods:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: crictl containers:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> k8s: describe netcat deployment:
error: context "false-428181" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-428181" does not exist

>>> k8s: netcat logs:
error: context "false-428181" does not exist

>>> k8s: describe coredns deployment:
error: context "false-428181" does not exist

>>> k8s: describe coredns pods:
error: context "false-428181" does not exist

>>> k8s: coredns logs:
error: context "false-428181" does not exist

>>> k8s: describe api server pod(s):
error: context "false-428181" does not exist

>>> k8s: api server logs:
error: context "false-428181" does not exist

>>> host: /etc/cni:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: ip a s:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: ip r s:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: iptables-save:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: iptables table nat:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> k8s: describe kube-proxy daemon set:
error: context "false-428181" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-428181" does not exist

>>> k8s: kube-proxy logs:
error: context "false-428181" does not exist

>>> host: kubelet daemon status:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: kubelet daemon config:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> k8s: kubelet logs:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 28 Mar 2024 22:01:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-037126
contexts:
- context:
    cluster: kubernetes-upgrade-037126
    extensions:
    - extension:
        last-update: Thu, 28 Mar 2024 22:01:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-037126
  name: kubernetes-upgrade-037126
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-037126
  user:
    client-certificate: /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/kubernetes-upgrade-037126/client.crt
    client-key: /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/kubernetes-upgrade-037126/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-428181

>>> host: docker daemon status:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: docker daemon config:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: /etc/docker/daemon.json:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: docker system info:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: cri-docker daemon status:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: cri-docker daemon config:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: cri-dockerd version:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: containerd daemon status:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: containerd daemon config:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: /etc/containerd/config.toml:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: containerd config dump:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: crio daemon status:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: crio daemon config:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: /etc/crio:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

>>> host: crio config:
* Profile "false-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428181"

----------------------- debugLogs end: false-428181 [took: 4.262630743s] --------------------------------
helpers_test.go:175: Cleaning up "false-428181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-428181
--- PASS: TestNetworkPlugins/group/false (4.71s)
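This subtest passes precisely because minikube refuses the configuration: the crio runtime always needs a CNI, so --cni=false is rejected up front with MK_USAGE, and the debug logs above merely confirm that no cluster or context was ever created. A hedged Go sketch of that kind of validation, mirroring the observed behavior rather than minikube's actual code:

package main

import (
	"errors"
	"fmt"
)

// validateRuntimeCNI mimics the observed rule: crio cannot run without a CNI.
func validateRuntimeCNI(runtime, cni string) error {
	if runtime == "crio" && cni == "false" {
		return errors.New(`The "crio" container runtime requires CNI`)
	}
	return nil
}

func main() {
	if err := validateRuntimeCNI("crio", "false"); err != nil {
		// Matches the MK_USAGE failure captured in the stderr above.
		fmt.Println("X Exiting due to MK_USAGE:", err)
	}
}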

TestStartStop/group/old-k8s-version/serial/FirstStart (142.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-633693 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0328 22:05:19.656189 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-633693 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m22.39519655s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (142.40s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-633693 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cb11bf23-8a11-44d6-922b-087f2959c6d0] Pending
helpers_test.go:344: "busybox" [cb11bf23-8a11-44d6-922b-087f2959c6d0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cb11bf23-8a11-44d6-922b-087f2959c6d0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.007867118s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-633693 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.70s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-633693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-633693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.721826531s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-633693 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.89s)

TestStartStop/group/old-k8s-version/serial/Stop (12.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-633693 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-633693 --alsologtostderr -v=3: (12.192808671s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.19s)

TestStartStop/group/no-preload/serial/FirstStart (70.44s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-363849 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-363849 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0: (1m10.443574513s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.44s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-633693 -n old-k8s-version-633693
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-633693 -n old-k8s-version-633693: exit status 7 (109.939922ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-633693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
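The note "exit status 7 (may be ok)" reflects how the status command appears to encode stopped components in the exit code's low bits (host, kubelet, apiserver), so 7 means all three are down, which is exactly right after a stop. Treat the exact encoding as an assumption inferred from this report; a small Go sketch of decoding it:

package main

import "fmt"

// Assumed bit layout, inferred from this report: each stopped component
// sets one bit in the status exit code.
const (
	hostStopped      = 1 << 0
	kubeletStopped   = 1 << 1
	apiserverStopped = 1 << 2
)

func main() {
	code := 7 // the exit status seen above
	fmt.Printf("host stopped=%v kubelet stopped=%v apiserver stopped=%v\n",
		code&hostStopped != 0, code&kubeletStopped != 0, code&apiserverStopped != 0)
}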

TestStartStop/group/no-preload/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-363849 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8b051161-02ce-4a60-bb1c-b4b6b28151a8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8b051161-02ce-4a60-bb1c-b4b6b28151a8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004597787s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-363849 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.37s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-363849 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-363849 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.100456672s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-363849 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/no-preload/serial/Stop (12.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-363849 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-363849 --alsologtostderr -v=3: (12.057319384s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-363849 -n no-preload-363849
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-363849 -n no-preload-363849: exit status 7 (93.559847ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-363849 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (298.01s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-363849 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0
E0328 22:08:15.901391 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
E0328 22:10:02.705413 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
E0328 22:10:19.655818 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-363849 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0: (4m57.637130626s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-363849 -n no-preload-363849
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (298.01s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-ckjdn" [f53afeb3-8985-429d-9484-e959ad645671] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00363394s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-fk9zm" [b880c745-6a35-44b4-9d62-2e919451af20] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004317449s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-ckjdn" [f53afeb3-8985-429d-9484-e959ad645671] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005481019s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-633693 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-fk9zm" [b880c745-6a35-44b4-9d62-2e919451af20] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003816646s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-363849 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-633693 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.02s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-633693 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-633693 -n old-k8s-version-633693
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-633693 -n old-k8s-version-633693: exit status 2 (341.66879ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-633693 -n old-k8s-version-633693
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-633693 -n old-k8s-version-633693: exit status 2 (316.707247ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-633693 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-633693 -n old-k8s-version-633693
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-633693 -n old-k8s-version-633693
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.02s)
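
For reference, the pause/unpause round-trip above can be replayed by hand against the same profile. The non-zero exits are expected: while the cluster is paused, minikube status reports Paused/Stopped and exits 2, which the harness explicitly tolerates ("may be ok"). A minimal sketch, using only commands that appear in this run:

	out/minikube-linux-arm64 pause -p old-k8s-version-633693 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-633693   # prints "Paused", exit status 2
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-633693     # prints "Stopped", exit status 2
	out/minikube-linux-arm64 unpause -p old-k8s-version-633693 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-633693   # expected to exit 0 again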

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-363849 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.84s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-363849 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-363849 -n no-preload-363849
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-363849 -n no-preload-363849: exit status 2 (332.736852ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-363849 -n no-preload-363849
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-363849 -n no-preload-363849: exit status 2 (354.035141ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-363849 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-363849 --alsologtostderr -v=1: (1.155800929s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-363849 -n no-preload-363849
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-363849 -n no-preload-363849
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.84s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (89.93s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-989661 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-989661 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.3: (1m29.925494989s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.73s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-762328 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.3
E0328 22:13:15.901261 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-762328 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.3: (1m25.731000054s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.73s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.41s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-762328 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fb6868a2-3c31-4fca-a3ec-2150df14b73f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fb6868a2-3c31-4fca-a3ec-2150df14b73f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004180111s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-762328 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.41s)
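
The DeployApp flow above reduces to three kubectl calls. A hand-replayable sketch; kubectl wait is a stand-in for the harness's own poll loop (which watched for pods labelled integration-test=busybox), not the command the test actually ran:

	kubectl --context default-k8s-diff-port-762328 create -f testdata/busybox.yaml
	# stand-in for the harness poll: wait for the busybox pod to become Ready
	kubectl --context default-k8s-diff-port-762328 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m0s
	# the test then reads the container's file-descriptor limit
	kubectl --context default-k8s-diff-port-762328 exec busybox -- /bin/sh -c "ulimit -n"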

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.35s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-989661 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [938c9552-e8b9-4580-bc56-3f2808e13460] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [938c9552-e8b9-4580-bc56-3f2808e13460] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004373092s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-989661 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-989661 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-989661 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.113268689s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-989661 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.41s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-762328 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-762328 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.307119828s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-762328 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.02s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-989661 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-989661 --alsologtostderr -v=3: (12.021266126s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-762328 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-762328 --alsologtostderr -v=3: (11.940642386s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-989661 -n embed-certs-989661
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-989661 -n embed-certs-989661: exit status 7 (75.93276ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-989661 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
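
EnableAddonAfterStop checks that addon configuration can still be changed while the cluster is down: status exits 7 (host Stopped, tolerated by the harness), yet enabling the dashboard addon succeeds, presumably because it only updates the profile's stored configuration rather than the live cluster. Replayed with the commands from the log:

	out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-989661   # prints "Stopped", exit status 7
	out/minikube-linux-arm64 addons enable dashboard -p embed-certs-989661 --images=MetricsScraper=registry.k8s.io/echoserver:1.4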

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (272.39s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-989661 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-989661 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.3: (4m32.019940214s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-989661 -n embed-certs-989661
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (272.39s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-762328 -n default-k8s-diff-port-762328
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-762328 -n default-k8s-diff-port-762328: exit status 7 (162.258387ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-762328 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (298.79s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-762328 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.3
E0328 22:15:19.655488 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
E0328 22:15:57.645187 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/client.crt: no such file or directory
E0328 22:15:57.650462 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/client.crt: no such file or directory
E0328 22:15:57.660737 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/client.crt: no such file or directory
E0328 22:15:57.680987 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/client.crt: no such file or directory
E0328 22:15:57.721231 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/client.crt: no such file or directory
E0328 22:15:57.801524 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/client.crt: no such file or directory
E0328 22:15:57.961907 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/client.crt: no such file or directory
E0328 22:15:58.282086 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/client.crt: no such file or directory
E0328 22:15:58.922374 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/client.crt: no such file or directory
E0328 22:16:00.204539 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/client.crt: no such file or directory
E0328 22:16:02.764732 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/client.crt: no such file or directory
E0328 22:16:07.885302 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/client.crt: no such file or directory
E0328 22:16:18.126387 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/client.crt: no such file or directory
E0328 22:16:38.606916 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/client.crt: no such file or directory
E0328 22:17:19.567148 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/client.crt: no such file or directory
E0328 22:17:25.880237 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/client.crt: no such file or directory
E0328 22:17:25.885675 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/client.crt: no such file or directory
E0328 22:17:25.895921 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/client.crt: no such file or directory
E0328 22:17:25.916337 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/client.crt: no such file or directory
E0328 22:17:25.956509 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/client.crt: no such file or directory
E0328 22:17:26.037521 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/client.crt: no such file or directory
E0328 22:17:26.197706 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/client.crt: no such file or directory
E0328 22:17:26.518238 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/client.crt: no such file or directory
E0328 22:17:27.158379 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/client.crt: no such file or directory
E0328 22:17:28.438579 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/client.crt: no such file or directory
E0328 22:17:30.999419 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/client.crt: no such file or directory
E0328 22:17:36.119894 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/client.crt: no such file or directory
E0328 22:17:46.361000 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/client.crt: no such file or directory
E0328 22:17:58.948602 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
E0328 22:18:06.841894 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/client.crt: no such file or directory
E0328 22:18:15.901277 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
E0328 22:18:41.487351 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/client.crt: no such file or directory
E0328 22:18:47.802593 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-762328 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.3: (4m58.228966063s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-762328 -n default-k8s-diff-port-762328
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (298.79s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ts492" [fe76ac6f-c7d9-4148-9b77-b070c3e4758d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003910333s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ts492" [fe76ac6f-c7d9-4148-9b77-b070c3e4758d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004540253s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-989661 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-989661 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.63s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-989661 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-989661 -n embed-certs-989661
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-989661 -n embed-certs-989661: exit status 2 (362.784204ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-989661 -n embed-certs-989661
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-989661 -n embed-certs-989661: exit status 2 (318.275254ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-989661 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-989661 -n embed-certs-989661
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-989661 -n embed-certs-989661
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.63s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (46.92s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-096005 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-096005 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0: (46.922796143s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.92s)
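
For readability, the newest-cni start invocation above, with flags verbatim from the run, broken across lines:

	# --wait is narrowed to apiserver/system_pods/default_sa because in cni mode
	# no workload pods can schedule until a network plugin is installed (see the
	# "cni mode requires additional setup" warnings in the tests below)
	out/minikube-linux-arm64 start -p newest-cni-096005 \
	  --memory=2200 --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa \
	  --feature-gates ServerSideApply=true \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=crio \
	  --kubernetes-version=v1.30.0-beta.0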

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.03s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-khj7c" [f520206f-aae5-4b6b-b5b5-0be155241426] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.026682219s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-khj7c" [f520206f-aae5-4b6b-b5b5-0be155241426] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005950771s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-762328 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-762328 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.29s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-762328 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-762328 --alsologtostderr -v=1: (1.065157271s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-762328 -n default-k8s-diff-port-762328
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-762328 -n default-k8s-diff-port-762328: exit status 2 (384.876105ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-762328 -n default-k8s-diff-port-762328
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-762328 -n default-k8s-diff-port-762328: exit status 2 (366.845347ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-762328 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-762328 --alsologtostderr -v=1: (1.037104745s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-762328 -n default-k8s-diff-port-762328
E0328 22:20:09.722746 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-762328 -n default-k8s-diff-port-762328
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.29s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (86.17s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-428181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0328 22:20:19.656168 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-428181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m26.16772833s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.24s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-096005 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-096005 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.235217195s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.29s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-096005 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-096005 --alsologtostderr -v=3: (1.289964807s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-096005 -n newest-cni-096005
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-096005 -n newest-cni-096005: exit status 7 (101.550472ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-096005 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (20.73s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-096005 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-096005 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0-beta.0: (20.297650936s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-096005 -n newest-cni-096005
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.73s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-096005 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)
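
VerifyKubernetesImages is a list-and-diff: the harness dumps the loaded images as JSON and reports anything outside its expected Kubernetes/minikube set, which is why kindest/kindnetd (and, in the other profiles, the busybox test image) is flagged as "non-minikube". The listing step alone:

	out/minikube-linux-arm64 -p newest-cni-096005 image list --format=json
	# the harness parses this JSON and flags images not on its expected list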

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.95s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-096005 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-096005 -n newest-cni-096005
E0328 22:20:57.644840 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-096005 -n newest-cni-096005: exit status 2 (336.579553ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-096005 -n newest-cni-096005
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-096005 -n newest-cni-096005: exit status 2 (345.882367ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-096005 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-096005 -n newest-cni-096005
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-096005 -n newest-cni-096005
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.95s)
E0328 22:27:01.523164 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/auto-428181/client.crt: no such file or directory
E0328 22:27:15.279441 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/default-k8s-diff-port-762328/client.crt: no such file or directory
E0328 22:27:22.005284 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/auto-428181/client.crt: no such file or directory
E0328 22:27:25.877755 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/client.crt: no such file or directory
E0328 22:27:27.896610 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/kindnet-428181/client.crt: no such file or directory
E0328 22:27:27.901887 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/kindnet-428181/client.crt: no such file or directory
E0328 22:27:27.912158 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/kindnet-428181/client.crt: no such file or directory
E0328 22:27:27.932467 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/kindnet-428181/client.crt: no such file or directory
E0328 22:27:27.972796 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/kindnet-428181/client.crt: no such file or directory
E0328 22:27:28.053058 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/kindnet-428181/client.crt: no such file or directory
E0328 22:27:28.213521 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/kindnet-428181/client.crt: no such file or directory
E0328 22:27:28.534069 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/kindnet-428181/client.crt: no such file or directory
E0328 22:27:29.174621 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/kindnet-428181/client.crt: no such file or directory
E0328 22:27:30.454869 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/kindnet-428181/client.crt: no such file or directory
E0328 22:27:33.015696 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/kindnet-428181/client.crt: no such file or directory
E0328 22:27:38.136819 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/kindnet-428181/client.crt: no such file or directory
E0328 22:27:48.377423 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/kindnet-428181/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (85.74s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-428181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0328 22:21:25.327999 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-428181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m25.740482797s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.74s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-428181 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.3s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-428181 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wvsk7" [2a01864f-ecd4-4134-bb1f-48f636521271] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wvsk7" [2a01864f-ecd4-4134-bb1f-48f636521271] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005417593s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-428181 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-428181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-428181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
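
The three probes above cover the basic connectivity matrix for the auto (default CNI) cluster: in-cluster DNS resolution, pod-local loopback, and hairpin traffic back through the pod's own service. All three run inside the netcat deployment, using the exact commands from the log:

	# service DNS resolution from inside the pod
	kubectl --context auto-428181 exec deployment/netcat -- nslookup kubernetes.default
	# loopback: the pod reaches its own port 8080 via localhost
	kubectl --context auto-428181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# hairpin: the pod reaches itself back through the "netcat" service name
	kubectl --context auto-428181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"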

                                                
                                    
TestNetworkPlugins/group/calico/Start (75.71s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-428181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0328 22:22:25.877667 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/no-preload-363849/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-428181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m15.707953876s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.71s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-zsq9x" [08eb2ee2-9936-416a-9e66-c808ddde3cb8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005270954s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
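
ControllerPod just waits for the CNI daemon pod to come up. An equivalent one-liner, with kubectl wait standing in for the harness poll and the label taken from the log:

	kubectl --context kindnet-428181 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=10m0s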

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-428181 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.29s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-428181 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-555tc" [2ddc3352-4e9f-410f-a032-91614f1cbc79] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-555tc" [2ddc3352-4e9f-410f-a032-91614f1cbc79] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004052387s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-428181 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)
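
The nslookup of kubernetes.default succeeds only when the pod's cluster DNS is wired up: the pod's resolv.conf search path expands the short name to kubernetes.default.svc.cluster.local and CoreDNS answers with the apiserver's ClusterIP. A tiny hedged in-pod sketch of the same check in Go (the test itself shells out to nslookup instead):

    // Minimal in-pod sketch; relies on the pod's /etc/resolv.conf search path.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        addrs, err := net.LookupHost("kubernetes.default")
        if err != nil {
            fmt.Println("cluster DNS lookup failed:", err)
            return
        }
        // Typically the apiserver ClusterIP, e.g. 10.96.0.1.
        fmt.Println("resolved to:", addrs)
    }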

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-428181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-428181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)
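
Localhost and HairPin are the same nc probe pointed at different targets: Localhost dials 127.0.0.1:8080 inside the pod, while HairPin dials the pod's own Service name ("netcat"), which only works when the CNI supports hairpin traffic, i.e. a pod reaching itself back through its service VIP. In the nc invocation, -z connects without sending data, -w 5 caps the wait at five seconds, and -i 5 spaces out probes. A hedged sketch of driving the same probe from Go, assuming kubectl is on PATH; hairpinProbe is illustrative, not a suite helper:

    // Hedged sketch; the suite runs this via its own Run helper instead.
    package hairpin

    import (
        "fmt"
        "os/exec"
    )

    func hairpinProbe(kubeContext string) error {
        cmd := exec.Command("kubectl", "--context", kubeContext,
            "exec", "deployment/netcat", "--",
            "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("hairpin probe failed: %v: %s", err, out)
        }
        return nil
    }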

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (71.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-428181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0328 22:23:15.901521 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/functional-351339/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-428181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m11.036527589s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.04s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4g7r6" [0a6128e3-5469-461e-948b-daf32216cddb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.008256947s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-428181 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-428181 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jsg2w" [a4a022df-2ba9-4311-807c-06624ec9d99f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jsg2w" [a4a022df-2ba9-4311-807c-06624ec9d99f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004926264s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.36s)
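
The "Pending / Ready:ContainersNotReady (...)" lines above are the pod's phase plus its not-yet-true conditions; Ready and ContainersReady flip to True only once every container reports ready, which is the transition each NetCatPod test waits for. A hedged sketch of rendering that status string from client-go types (podStatusLine is illustrative):

    // Hedged sketch approximating the "Phase / Condition:Reason" lines above.
    package podstatus

    import (
        "fmt"
        "strings"

        corev1 "k8s.io/api/core/v1"
    )

    func podStatusLine(p corev1.Pod) string {
        parts := []string{string(p.Status.Phase)}
        for _, c := range p.Status.Conditions {
            if (c.Type == corev1.PodReady || c.Type == corev1.ContainersReady) &&
                c.Status != corev1.ConditionTrue {
                parts = append(parts, fmt.Sprintf("%s:%s", c.Type, c.Reason))
            }
        }
        return strings.Join(parts, " / ")
    }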

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-428181 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-428181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-428181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (94.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-428181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-428181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m34.338062688s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (94.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-428181 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-428181 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-grxmp" [69abbd1b-2552-46a3-9db9-21b9bc32dbe9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-grxmp" [69abbd1b-2552-46a3-9db9-21b9bc32dbe9] Running
E0328 22:24:31.437498 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/default-k8s-diff-port-762328/client.crt: no such file or directory
E0328 22:24:31.442598 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/default-k8s-diff-port-762328/client.crt: no such file or directory
E0328 22:24:31.452857 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/default-k8s-diff-port-762328/client.crt: no such file or directory
E0328 22:24:31.473131 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/default-k8s-diff-port-762328/client.crt: no such file or directory
E0328 22:24:31.513399 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/default-k8s-diff-port-762328/client.crt: no such file or directory
E0328 22:24:31.593660 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/default-k8s-diff-port-762328/client.crt: no such file or directory
E0328 22:24:31.754424 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/default-k8s-diff-port-762328/client.crt: no such file or directory
E0328 22:24:32.075136 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/default-k8s-diff-port-762328/client.crt: no such file or directory
E0328 22:24:32.715428 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/default-k8s-diff-port-762328/client.crt: no such file or directory
E0328 22:24:33.996267 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/default-k8s-diff-port-762328/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004848839s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-428181 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-428181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-428181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (71.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-428181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0328 22:25:12.397948 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/default-k8s-diff-port-762328/client.crt: no such file or directory
E0328 22:25:19.656152 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/addons-564371/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-428181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m11.120718203s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-428181 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-428181 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-c8l8z" [67f547c2-eef3-4e17-9af1-95d14f66a2a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0328 22:25:53.359037 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/default-k8s-diff-port-762328/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-c8l8z" [67f547c2-eef3-4e17-9af1-95d14f66a2a4] Running
E0328 22:25:57.644989 1151363 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/old-k8s-version-633693/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003634798s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-428181 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-428181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-428181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.43s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vz6zn" [705fe965-a870-48c8-9ec4-afebced7279c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004696541s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-428181 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-428181 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9xqwd" [820c300c-c784-4187-8f2d-b78156ab8fe3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9xqwd" [820c300c-c784-4187-8f2d-b78156ab8fe3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.008205872s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.39s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (86.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-428181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-428181 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m26.974966714s)
--- PASS: TestNetworkPlugins/group/bridge/Start (86.98s)
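
The five Start tests in this group differ only in how the CNI is selected: --cni=calico, --cni=testdata/kube-flannel.yaml (a custom manifest), --enable-default-cni=true, --cni=flannel, and --cni=bridge. A hedged table-driven sketch of that matrix, assuming the out/minikube-linux-arm64 binary from this run; net_test.go's actual loop is structured differently:

    // Hedged sketch of the CNI flag matrix; not the real net_test.go loop.
    package cni_test

    import (
        "os/exec"
        "testing"
    )

    func TestCNIMatrix(t *testing.T) {
        cases := map[string]string{
            "calico":             "--cni=calico",
            "custom-flannel":     "--cni=testdata/kube-flannel.yaml",
            "enable-default-cni": "--enable-default-cni=true",
            "flannel":            "--cni=flannel",
            "bridge":             "--cni=bridge",
        }
        for name, cniFlag := range cases {
            name, cniFlag := name, cniFlag
            t.Run(name, func(t *testing.T) {
                out, err := exec.Command("out/minikube-linux-arm64",
                    "start", "-p", name+"-428181", "--memory=3072",
                    "--wait=true", "--wait-timeout=15m", cniFlag,
                    "--driver=docker", "--container-runtime=crio").CombinedOutput()
                if err != nil {
                    t.Fatalf("start failed: %v\n%s", err, out)
                }
            })
        }
    }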

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-428181 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.43s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-428181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-428181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-428181 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-428181 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4lb6x" [f002615b-c470-4270-b535-2f0ea21e3409] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4lb6x" [f002615b-c470-4270-b535-2f0ea21e3409] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00423272s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-428181 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-428181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-428181 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    

Test skip (32/335)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists; binaries are present within it.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test only applies to darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists; binaries are present within it.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test only applies to darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists; binaries are present within it.
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/kubectl
aaa_download_only_test.go:167: Test only applies to darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.56s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-434763 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-434763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-434763
--- SKIP: TestDownloadOnlyKic (0.56s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)
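
Most of the skips that follow gate on the same two facts: runtime.GOARCH and the container runtime under test (crio in this run). A minimal hedged sketch of that gating pattern; skipIfUnsupported is illustrative and not a helper that exists in the suite:

    // Hedged sketch of the arch/runtime gating used throughout these skips.
    package gating

    import (
        "runtime"
        "testing"
    )

    func skipIfUnsupported(t *testing.T, containerRuntime string) {
        t.Helper()
        if runtime.GOARCH == "arm64" && containerRuntime != "docker" {
            // See https://github.com/kubernetes/minikube/issues/10144.
            t.Skip("skipping - only docker runtime supported on arm64")
        }
    }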

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-781969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-781969
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-428181 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-428181

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-428181

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-428181

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-428181

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-428181

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-428181

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-428181

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-428181

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-428181

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-428181

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-428181

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-428181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-428181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-428181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-428181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-428181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-428181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-428181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-428181" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-428181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-428181" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-428181" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 28 Mar 2024 22:01:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-037126
contexts:
- context:
    cluster: kubernetes-upgrade-037126
    extensions:
    - extension:
        last-update: Thu, 28 Mar 2024 22:01:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-037126
  name: kubernetes-upgrade-037126
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-037126
  user:
    client-certificate: /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/kubernetes-upgrade-037126/client.crt
    client-key: /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/kubernetes-upgrade-037126/client.key
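
The dump above also explains every failure in this debug section: the merged kubeconfig only knows the kubernetes-upgrade-037126 cluster and current-context is empty, so there is no kubenet-428181 context to resolve. A hedged client-go sketch of loading a kubeconfig and forcing a named context; the path below is a placeholder assumption (it would normally come from $KUBECONFIG):

    // Hedged sketch; with the config above this fails with "context was not
    // found", exactly as logged. The path is a placeholder assumption.
    package kubeconfig

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func hostFor(contextName string) (string, error) {
        rules := &clientcmd.ClientConfigLoadingRules{ExplicitPath: "/home/jenkins/.kube/config"}
        overrides := &clientcmd.ConfigOverrides{CurrentContext: contextName}
        cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
        if err != nil {
            return "", fmt.Errorf("loading context %q: %w", contextName, err)
        }
        return cfg.Host, nil
    }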

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-428181

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

>>> host: docker system info:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

>>> host: cri-docker daemon status:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

>>> host: cri-docker daemon config:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

>>> host: cri-dockerd version:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

>>> host: containerd daemon status:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

>>> host: containerd daemon config:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

>>> host: containerd config dump:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

>>> host: crio daemon status:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

>>> host: crio daemon config:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

>>> host: /etc/crio:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

>>> host: crio config:
* Profile "kubenet-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428181"

----------------------- debugLogs end: kubenet-428181 [took: 4.396051387s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-428181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-428181
--- SKIP: TestNetworkPlugins/group/kubenet (4.60s)
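
Every host probe above failed the same way because no "kubenet-428181" profile existed when debugLogs swept the host after the skip: the group was skipped before any cluster was started. A minimal sketch of how one could verify that state by hand, using only commands this log itself names:

    # list known profiles; a skipped group's profile should be absent
    out/minikube-linux-arm64 profile list
    # starting the profile manually is what would make these probes succeed
    out/minikube-linux-arm64 start -p kubenet-428181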

TestNetworkPlugins/group/cilium (5.18s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-428181 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-428181

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-428181

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-428181

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-428181

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-428181

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-428181

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-428181

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-428181

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-428181

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-428181

>>> host: /etc/nsswitch.conf:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: /etc/hosts:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: /etc/resolv.conf:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-428181

>>> host: crictl pods:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: crictl containers:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> k8s: describe netcat deployment:
error: context "cilium-428181" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-428181" does not exist

>>> k8s: netcat logs:
error: context "cilium-428181" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-428181" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-428181" does not exist

>>> k8s: coredns logs:
error: context "cilium-428181" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-428181" does not exist

>>> k8s: api server logs:
error: context "cilium-428181" does not exist

>>> host: /etc/cni:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: ip a s:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: ip r s:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: iptables-save:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: iptables table nat:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-428181

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-428181

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-428181" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-428181" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-428181

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-428181

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-428181" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-428181" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-428181" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-428181" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-428181" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: kubelet daemon config:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> k8s: kubelet logs:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17877-1145955/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 28 Mar 2024 22:01:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-037126
contexts:
- context:
    cluster: kubernetes-upgrade-037126
    extensions:
    - extension:
        last-update: Thu, 28 Mar 2024 22:01:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-037126
  name: kubernetes-upgrade-037126
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-037126
  user:
    client-certificate: /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/kubernetes-upgrade-037126/client.crt
    client-key: /home/jenkins/minikube-integration/17877-1145955/.minikube/profiles/kubernetes-upgrade-037126/client.key

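The dump above explains both error shapes seen throughout this section: the only kubeconfig entry is kubernetes-upgrade-037126, left behind by an earlier test, and current-context is empty, so every command pinned to the cilium-428181 context fails before any cluster is contacted. A minimal sketch of how one might confirm this with stock kubectl, using only names taken from the dump:

    # list every context kubectl knows about; cilium-428181 will be absent
    kubectl config get-contexts
    # the leftover entry is the only context that would resolve at all
    kubectl config use-context kubernetes-upgrade-037126
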
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-428181

>>> host: docker daemon status:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: docker daemon config:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: docker system info:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: cri-docker daemon status:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: cri-docker daemon config:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: cri-dockerd version:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: containerd daemon status:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: containerd daemon config:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: containerd config dump:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: crio daemon status:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: crio daemon config:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: /etc/crio:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

>>> host: crio config:
* Profile "cilium-428181" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428181"

----------------------- debugLogs end: cilium-428181 [took: 4.959360993s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-428181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-428181
--- SKIP: TestNetworkPlugins/group/cilium (5.18s)
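
Like the kubenet group above, cilium is skipped by the guard at net_test.go:102 rather than by a failure, so its debugLogs can only show the absence of a cluster. A minimal sketch of how one might invoke just this group from a minikube checkout, assuming the integration tests live under test/integration (as the file names here suggest) and a prebuilt out/minikube-linux-arm64 as used throughout this report; the guard would simply reproduce the skip:

    # -run takes a regex over the subtest path; quotes keep the shell out of it
    go test ./test/integration -run 'TestNetworkPlugins/group/cilium' -v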
