Test Report: Docker_Linux_crio_arm64 18756

159c0885aec790b0bc18754712c4d2a4038767fb:2024-04-29:34251

Test fail (3/327)

| Order | Failed test                                            | Duration (s) |
|-------|--------------------------------------------------------|--------------|
| 30    | TestAddons/parallel/Ingress                            | 168.12       |
| 32    | TestAddons/parallel/MetricsServer                      | 325.67       |
| 301   | TestStartStop/group/old-k8s-version/serial/SecondStart | 377.37       |
TestAddons/parallel/Ingress (168.12s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-760922 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-760922 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-760922 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ed4d48a3-2bfd-49d8-a830-99d9a45f8f4c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ed4d48a3-2bfd-49d8-a830-99d9a45f8f4c] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003178318s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-760922 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-760922 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.407633862s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
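For context (not part of the captured log): exit status 28 here is curl's own exit code, CURLE_OPERATION_TIMEDOUT, forwarded unchanged by `minikube ssh`, so the request to the ingress controller simply never got a response. A minimal sketch of decoding the status, covering only the codes that appear in this report:

```shell
# Map the curl exit codes seen in this failure to their documented
# meaning. `minikube ssh "curl ..."` propagates curl's exit status,
# so status 28 above came from curl itself, not from ssh.
curl_exit_meaning() {
  case "$1" in
    0)  echo "success" ;;
    28) echo "operation timed out (CURLE_OPERATION_TIMEDOUT)" ;;
    *)  echo "other curl error ($1)" ;;
  esac
}

curl_exit_meaning 28
```

When reproducing the check by hand, bounding the wait with `curl -m 30` avoids the multi-minute hang seen in this run before the code 28 surfaces.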
addons_test.go:286: (dbg) Run:  kubectl --context addons-760922 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-760922 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.061688865s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-760922 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-760922 addons disable ingress-dns --alsologtostderr -v=1: (1.157358833s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-760922 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-760922 addons disable ingress --alsologtostderr -v=1: (7.728905495s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-760922
helpers_test.go:235: (dbg) docker inspect addons-760922:

-- stdout --
	[
	    {
	        "Id": "acf70231910d5cd4ee4713c17db090abfc96713bb9ed8bf6949a750ff680fdbf",
	        "Created": "2024-04-29T11:34:28.033467642Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1238069,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-29T11:34:28.328910908Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c9315e0f61546d7b9630cf89252fa7f614fc966830e816cca5333df5c944376f",
	        "ResolvConfPath": "/var/lib/docker/containers/acf70231910d5cd4ee4713c17db090abfc96713bb9ed8bf6949a750ff680fdbf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/acf70231910d5cd4ee4713c17db090abfc96713bb9ed8bf6949a750ff680fdbf/hostname",
	        "HostsPath": "/var/lib/docker/containers/acf70231910d5cd4ee4713c17db090abfc96713bb9ed8bf6949a750ff680fdbf/hosts",
	        "LogPath": "/var/lib/docker/containers/acf70231910d5cd4ee4713c17db090abfc96713bb9ed8bf6949a750ff680fdbf/acf70231910d5cd4ee4713c17db090abfc96713bb9ed8bf6949a750ff680fdbf-json.log",
	        "Name": "/addons-760922",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-760922:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-760922",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/832d143a181c8a5d725e9a1a80315a6f7477608759df936cc5c4f8d624673f4d-init/diff:/var/lib/docker/overlay2/99267fe96688a6fee0a92469b55a9da51d73214dc11fc371bf5149dbc069c731/diff",
	                "MergedDir": "/var/lib/docker/overlay2/832d143a181c8a5d725e9a1a80315a6f7477608759df936cc5c4f8d624673f4d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/832d143a181c8a5d725e9a1a80315a6f7477608759df936cc5c4f8d624673f4d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/832d143a181c8a5d725e9a1a80315a6f7477608759df936cc5c4f8d624673f4d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-760922",
	                "Source": "/var/lib/docker/volumes/addons-760922/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-760922",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-760922",
	                "name.minikube.sigs.k8s.io": "addons-760922",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f107ebcfb47a9ccc41c58d28aacc2d32162c73103ef133439caf0386f289eac8",
	            "SandboxKey": "/var/run/docker/netns/f107ebcfb47a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34278"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34277"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34274"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34276"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34275"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-760922": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "aa18007d61fa281b0d63804776f16a7a9362ec2b322dcf76ee719f08f8b5b429",
	                    "EndpointID": "eac6c9ff872b0265ff91315e23dd5f90d8d58340ae76bc3113f89b2a6f110287",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-760922",
	                        "acf70231910d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-760922 -n addons-760922
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-760922 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-760922 logs -n 25: (1.465131923s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC | 29 Apr 24 11:34 UTC |
	| delete  | -p download-only-665613                                                                     | download-only-665613   | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC | 29 Apr 24 11:34 UTC |
	| delete  | -p download-only-895081                                                                     | download-only-895081   | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC | 29 Apr 24 11:34 UTC |
	| delete  | -p download-only-665613                                                                     | download-only-665613   | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC | 29 Apr 24 11:34 UTC |
	| start   | --download-only -p                                                                          | download-docker-209390 | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC |                     |
	|         | download-docker-209390                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-209390                                                                   | download-docker-209390 | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC | 29 Apr 24 11:34 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-725376   | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC |                     |
	|         | binary-mirror-725376                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45633                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-725376                                                                     | binary-mirror-725376   | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC | 29 Apr 24 11:34 UTC |
	| addons  | enable dashboard -p                                                                         | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC |                     |
	|         | addons-760922                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC |                     |
	|         | addons-760922                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-760922 --wait=true                                                                | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC | 29 Apr 24 11:37 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-760922 ip                                                                            | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:37 UTC | 29 Apr 24 11:37 UTC |
	| addons  | addons-760922 addons disable                                                                | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:37 UTC | 29 Apr 24 11:37 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:37 UTC | 29 Apr 24 11:38 UTC |
	|         | -p addons-760922                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-760922 ssh cat                                                                       | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:38 UTC | 29 Apr 24 11:38 UTC |
	|         | /opt/local-path-provisioner/pvc-bb91cfb2-1bc0-483a-82bf-c8a42280a852_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-760922 addons disable                                                                | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:38 UTC | 29 Apr 24 11:38 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-760922 addons                                                                        | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:38 UTC | 29 Apr 24 11:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:38 UTC | 29 Apr 24 11:38 UTC |
	|         | addons-760922                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:38 UTC | 29 Apr 24 11:38 UTC |
	|         | -p addons-760922                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-760922 addons                                                                        | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:38 UTC | 29 Apr 24 11:38 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:38 UTC | 29 Apr 24 11:38 UTC |
	|         | addons-760922                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-760922 ssh curl -s                                                                   | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:38 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-760922 ip                                                                            | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:41 UTC | 29 Apr 24 11:41 UTC |
	| addons  | addons-760922 addons disable                                                                | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:41 UTC | 29 Apr 24 11:41 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-760922 addons disable                                                                | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:41 UTC | 29 Apr 24 11:41 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 11:34:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 11:34:04.652540 1237617 out.go:291] Setting OutFile to fd 1 ...
	I0429 11:34:04.652706 1237617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:34:04.652716 1237617 out.go:304] Setting ErrFile to fd 2...
	I0429 11:34:04.652721 1237617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:34:04.652955 1237617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18756-1231546/.minikube/bin
	I0429 11:34:04.653407 1237617 out.go:298] Setting JSON to false
	I0429 11:34:04.654299 1237617 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":26189,"bootTime":1714364256,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0429 11:34:04.654373 1237617 start.go:139] virtualization:  
	I0429 11:34:04.657390 1237617 out.go:177] * [addons-760922] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0429 11:34:04.661168 1237617 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 11:34:04.663249 1237617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:34:04.661284 1237617 notify.go:220] Checking for updates...
	I0429 11:34:04.667216 1237617 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18756-1231546/kubeconfig
	I0429 11:34:04.669479 1237617 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18756-1231546/.minikube
	I0429 11:34:04.671454 1237617 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0429 11:34:04.674266 1237617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 11:34:04.676642 1237617 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 11:34:04.699144 1237617 docker.go:122] docker version: linux-26.1.0:Docker Engine - Community
	I0429 11:34:04.699270 1237617 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 11:34:04.760727 1237617 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-29 11:34:04.75150628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 11:34:04.760839 1237617 docker.go:295] overlay module found
	I0429 11:34:04.763013 1237617 out.go:177] * Using the docker driver based on user configuration
	I0429 11:34:04.764800 1237617 start.go:297] selected driver: docker
	I0429 11:34:04.764813 1237617 start.go:901] validating driver "docker" against <nil>
	I0429 11:34:04.764827 1237617 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 11:34:04.765472 1237617 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 11:34:04.816941 1237617 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-29 11:34:04.808138598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 11:34:04.817110 1237617 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 11:34:04.817346 1237617 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 11:34:04.819030 1237617 out.go:177] * Using Docker driver with root privileges
	I0429 11:34:04.821064 1237617 cni.go:84] Creating CNI manager for ""
	I0429 11:34:04.821090 1237617 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 11:34:04.821099 1237617 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 11:34:04.821192 1237617 start.go:340] cluster config:
	{Name:addons-760922 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-760922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:34:04.823276 1237617 out.go:177] * Starting "addons-760922" primary control-plane node in "addons-760922" cluster
	I0429 11:34:04.825298 1237617 cache.go:121] Beginning downloading kic base image for docker with crio
	I0429 11:34:04.827269 1237617 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 11:34:04.829413 1237617 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 11:34:04.829446 1237617 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 11:34:04.829474 1237617 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18756-1231546/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4
	I0429 11:34:04.829484 1237617 cache.go:56] Caching tarball of preloaded images
	I0429 11:34:04.829573 1237617 preload.go:173] Found /home/jenkins/minikube-integration/18756-1231546/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0429 11:34:04.829588 1237617 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 11:34:04.829947 1237617 profile.go:143] Saving config to /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/config.json ...
	I0429 11:34:04.829968 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/config.json: {Name:mk8ff81118efc3ea2062fe7790d26ea20ad501d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:04.843100 1237617 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0429 11:34:04.843219 1237617 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory
	I0429 11:34:04.843245 1237617 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory, skipping pull
	I0429 11:34:04.843254 1237617 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in cache, skipping pull
	I0429 11:34:04.843262 1237617 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e as a tarball
	I0429 11:34:04.843272 1237617 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e from local cache
	I0429 11:34:21.280204 1237617 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e from cached tarball
	I0429 11:34:21.280239 1237617 cache.go:194] Successfully downloaded all kic artifacts
	I0429 11:34:21.280268 1237617 start.go:360] acquireMachinesLock for addons-760922: {Name:mk795d68e2ddd6b7e26da53c29b36b6339fa2857 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 11:34:21.280388 1237617 start.go:364] duration metric: took 97.813µs to acquireMachinesLock for "addons-760922"
	I0429 11:34:21.280420 1237617 start.go:93] Provisioning new machine with config: &{Name:addons-760922 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-760922 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 11:34:21.280522 1237617 start.go:125] createHost starting for "" (driver="docker")
	I0429 11:34:21.282901 1237617 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0429 11:34:21.283138 1237617 start.go:159] libmachine.API.Create for "addons-760922" (driver="docker")
	I0429 11:34:21.283178 1237617 client.go:168] LocalClient.Create starting
	I0429 11:34:21.283318 1237617 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca.pem
	I0429 11:34:21.554507 1237617 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/cert.pem
	I0429 11:34:21.746515 1237617 cli_runner.go:164] Run: docker network inspect addons-760922 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 11:34:21.762560 1237617 cli_runner.go:211] docker network inspect addons-760922 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 11:34:21.762659 1237617 network_create.go:281] running [docker network inspect addons-760922] to gather additional debugging logs...
	I0429 11:34:21.762686 1237617 cli_runner.go:164] Run: docker network inspect addons-760922
	W0429 11:34:21.780341 1237617 cli_runner.go:211] docker network inspect addons-760922 returned with exit code 1
	I0429 11:34:21.780374 1237617 network_create.go:284] error running [docker network inspect addons-760922]: docker network inspect addons-760922: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-760922 not found
	I0429 11:34:21.780399 1237617 network_create.go:286] output of [docker network inspect addons-760922]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-760922 not found
	
	** /stderr **
	I0429 11:34:21.780493 1237617 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 11:34:21.797382 1237617 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400277eb60}
	I0429 11:34:21.797421 1237617 network_create.go:124] attempt to create docker network addons-760922 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0429 11:34:21.797477 1237617 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-760922 addons-760922
	I0429 11:34:21.861693 1237617 network_create.go:108] docker network addons-760922 192.168.49.0/24 created
	I0429 11:34:21.861725 1237617 kic.go:121] calculated static IP "192.168.49.2" for the "addons-760922" container
	I0429 11:34:21.861817 1237617 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 11:34:21.876333 1237617 cli_runner.go:164] Run: docker volume create addons-760922 --label name.minikube.sigs.k8s.io=addons-760922 --label created_by.minikube.sigs.k8s.io=true
	I0429 11:34:21.892395 1237617 oci.go:103] Successfully created a docker volume addons-760922
	I0429 11:34:21.892476 1237617 cli_runner.go:164] Run: docker run --rm --name addons-760922-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-760922 --entrypoint /usr/bin/test -v addons-760922:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 11:34:23.844245 1237617 cli_runner.go:217] Completed: docker run --rm --name addons-760922-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-760922 --entrypoint /usr/bin/test -v addons-760922:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib: (1.951720717s)
	I0429 11:34:23.844279 1237617 oci.go:107] Successfully prepared a docker volume addons-760922
	I0429 11:34:23.844306 1237617 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 11:34:23.844325 1237617 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 11:34:23.844410 1237617 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18756-1231546/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-760922:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 11:34:27.964526 1237617 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18756-1231546/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-760922:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.120073957s)
	I0429 11:34:27.964562 1237617 kic.go:203] duration metric: took 4.120233883s to extract preloaded images to volume ...
	W0429 11:34:27.964716 1237617 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0429 11:34:27.964832 1237617 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0429 11:34:28.019282 1237617 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-760922 --name addons-760922 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-760922 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-760922 --network addons-760922 --ip 192.168.49.2 --volume addons-760922:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e
	I0429 11:34:28.336633 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Running}}
	I0429 11:34:28.361288 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:34:28.384444 1237617 cli_runner.go:164] Run: docker exec addons-760922 stat /var/lib/dpkg/alternatives/iptables
	I0429 11:34:28.455814 1237617 oci.go:144] the created container "addons-760922" has a running status.
	I0429 11:34:28.455846 1237617 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa...
	I0429 11:34:28.925821 1237617 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0429 11:34:28.955193 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:34:28.978252 1237617 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0429 11:34:28.978272 1237617 kic_runner.go:114] Args: [docker exec --privileged addons-760922 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0429 11:34:29.043716 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:34:29.069194 1237617 machine.go:94] provisionDockerMachine start ...
	I0429 11:34:29.069283 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:34:29.091374 1237617 main.go:141] libmachine: Using SSH client type: native
	I0429 11:34:29.091636 1237617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34278 <nil> <nil>}
	I0429 11:34:29.091645 1237617 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 11:34:29.240258 1237617 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-760922
	
	I0429 11:34:29.240321 1237617 ubuntu.go:169] provisioning hostname "addons-760922"
	I0429 11:34:29.240417 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:34:29.257735 1237617 main.go:141] libmachine: Using SSH client type: native
	I0429 11:34:29.257972 1237617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34278 <nil> <nil>}
	I0429 11:34:29.257983 1237617 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-760922 && echo "addons-760922" | sudo tee /etc/hostname
	I0429 11:34:29.407508 1237617 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-760922
	
	I0429 11:34:29.407667 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:34:29.423856 1237617 main.go:141] libmachine: Using SSH client type: native
	I0429 11:34:29.424100 1237617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34278 <nil> <nil>}
	I0429 11:34:29.424116 1237617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-760922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-760922/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-760922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 11:34:29.548876 1237617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 11:34:29.548902 1237617 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18756-1231546/.minikube CaCertPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18756-1231546/.minikube}
	I0429 11:34:29.548944 1237617 ubuntu.go:177] setting up certificates
	I0429 11:34:29.548959 1237617 provision.go:84] configureAuth start
	I0429 11:34:29.549025 1237617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-760922
	I0429 11:34:29.565188 1237617 provision.go:143] copyHostCerts
	I0429 11:34:29.565272 1237617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.pem (1082 bytes)
	I0429 11:34:29.565398 1237617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18756-1231546/.minikube/cert.pem (1123 bytes)
	I0429 11:34:29.565468 1237617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18756-1231546/.minikube/key.pem (1675 bytes)
	I0429 11:34:29.565524 1237617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca-key.pem org=jenkins.addons-760922 san=[127.0.0.1 192.168.49.2 addons-760922 localhost minikube]
	I0429 11:34:30.051555 1237617 provision.go:177] copyRemoteCerts
	I0429 11:34:30.051632 1237617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 11:34:30.051678 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:34:30.072945 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:34:30.166306 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 11:34:30.193667 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 11:34:30.220123 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 11:34:30.244870 1237617 provision.go:87] duration metric: took 695.894789ms to configureAuth
	I0429 11:34:30.244896 1237617 ubuntu.go:193] setting minikube options for container-runtime
	I0429 11:34:30.245115 1237617 config.go:182] Loaded profile config "addons-760922": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 11:34:30.245238 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:34:30.263882 1237617 main.go:141] libmachine: Using SSH client type: native
	I0429 11:34:30.264129 1237617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34278 <nil> <nil>}
	I0429 11:34:30.264149 1237617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 11:34:30.485874 1237617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 11:34:30.485937 1237617 machine.go:97] duration metric: took 1.416722728s to provisionDockerMachine
	I0429 11:34:30.485962 1237617 client.go:171] duration metric: took 9.202772221s to LocalClient.Create
	I0429 11:34:30.486010 1237617 start.go:167] duration metric: took 9.202860705s to libmachine.API.Create "addons-760922"
	I0429 11:34:30.486036 1237617 start.go:293] postStartSetup for "addons-760922" (driver="docker")
	I0429 11:34:30.486059 1237617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 11:34:30.486170 1237617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 11:34:30.486258 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:34:30.507746 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:34:30.598372 1237617 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 11:34:30.601690 1237617 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0429 11:34:30.601726 1237617 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0429 11:34:30.601737 1237617 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0429 11:34:30.601744 1237617 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0429 11:34:30.601761 1237617 filesync.go:126] Scanning /home/jenkins/minikube-integration/18756-1231546/.minikube/addons for local assets ...
	I0429 11:34:30.601835 1237617 filesync.go:126] Scanning /home/jenkins/minikube-integration/18756-1231546/.minikube/files for local assets ...
	I0429 11:34:30.601860 1237617 start.go:296] duration metric: took 115.806029ms for postStartSetup
	I0429 11:34:30.602185 1237617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-760922
	I0429 11:34:30.618381 1237617 profile.go:143] Saving config to /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/config.json ...
	I0429 11:34:30.618686 1237617 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 11:34:30.618749 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:34:30.635417 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:34:30.721528 1237617 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 11:34:30.726011 1237617 start.go:128] duration metric: took 9.445474322s to createHost
	I0429 11:34:30.726035 1237617 start.go:83] releasing machines lock for "addons-760922", held for 9.445632763s
	I0429 11:34:30.726117 1237617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-760922
	I0429 11:34:30.741572 1237617 ssh_runner.go:195] Run: cat /version.json
	I0429 11:34:30.741615 1237617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 11:34:30.741624 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:34:30.741666 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:34:30.758178 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:34:30.768890 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:34:30.844164 1237617 ssh_runner.go:195] Run: systemctl --version
	I0429 11:34:30.958780 1237617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 11:34:31.099197 1237617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 11:34:31.103700 1237617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 11:34:31.125572 1237617 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0429 11:34:31.125669 1237617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 11:34:31.162093 1237617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0429 11:34:31.162114 1237617 start.go:494] detecting cgroup driver to use...
	I0429 11:34:31.162147 1237617 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0429 11:34:31.162193 1237617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 11:34:31.179624 1237617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:34:31.191562 1237617 docker.go:217] disabling cri-docker service (if available) ...
	I0429 11:34:31.191622 1237617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 11:34:31.206909 1237617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 11:34:31.222116 1237617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 11:34:31.312293 1237617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 11:34:31.400129 1237617 docker.go:233] disabling docker service ...
	I0429 11:34:31.400196 1237617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 11:34:31.420939 1237617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 11:34:31.432411 1237617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 11:34:31.523722 1237617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 11:34:31.624581 1237617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 11:34:31.637057 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:34:31.653854 1237617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 11:34:31.653921 1237617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:34:31.663733 1237617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 11:34:31.663808 1237617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:34:31.674262 1237617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:34:31.684801 1237617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:34:31.695750 1237617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 11:34:31.704629 1237617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:34:31.714536 1237617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:34:31.730044 1237617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:34:31.739621 1237617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 11:34:31.748064 1237617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 11:34:31.756880 1237617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:34:31.837643 1237617 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 11:34:31.960951 1237617 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 11:34:31.961087 1237617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 11:34:31.964456 1237617 start.go:562] Will wait 60s for crictl version
	I0429 11:34:31.964526 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:34:31.967870 1237617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 11:34:32.012494 1237617 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0429 11:34:32.012648 1237617 ssh_runner.go:195] Run: crio --version
	I0429 11:34:32.056226 1237617 ssh_runner.go:195] Run: crio --version
	I0429 11:34:32.102370 1237617 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.24.6 ...
	I0429 11:34:32.104264 1237617 cli_runner.go:164] Run: docker network inspect addons-760922 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 11:34:32.119339 1237617 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0429 11:34:32.123001 1237617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 11:34:32.133991 1237617 kubeadm.go:877] updating cluster {Name:addons-760922 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-760922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 11:34:32.134115 1237617 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 11:34:32.134172 1237617 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 11:34:32.214515 1237617 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 11:34:32.214537 1237617 crio.go:433] Images already preloaded, skipping extraction
	I0429 11:34:32.214593 1237617 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 11:34:32.254574 1237617 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 11:34:32.254596 1237617 cache_images.go:84] Images are preloaded, skipping loading
	I0429 11:34:32.254606 1237617 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.0 crio true true} ...
	I0429 11:34:32.254696 1237617 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-760922 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-760922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 11:34:32.254785 1237617 ssh_runner.go:195] Run: crio config
	I0429 11:34:32.309210 1237617 cni.go:84] Creating CNI manager for ""
	I0429 11:34:32.309239 1237617 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 11:34:32.309260 1237617 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 11:34:32.309284 1237617 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-760922 NodeName:addons-760922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 11:34:32.309435 1237617 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-760922"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 11:34:32.309511 1237617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 11:34:32.318363 1237617 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 11:34:32.318453 1237617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 11:34:32.327123 1237617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0429 11:34:32.344858 1237617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 11:34:32.362998 1237617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0429 11:34:32.381412 1237617 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0429 11:34:32.384769 1237617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 11:34:32.395516 1237617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:34:32.474980 1237617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 11:34:32.489258 1237617 certs.go:68] Setting up /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922 for IP: 192.168.49.2
	I0429 11:34:32.489323 1237617 certs.go:194] generating shared ca certs ...
	I0429 11:34:32.489353 1237617 certs.go:226] acquiring lock for ca certs: {Name:mkcd7972b318778b7d6fba570abab6a01a410b21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:32.489937 1237617 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.key
	I0429 11:34:32.998001 1237617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.crt ...
	I0429 11:34:32.998036 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.crt: {Name:mkb55926e354b45a8c55ecd39aada1a07cffe5eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:32.998764 1237617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.key ...
	I0429 11:34:32.998785 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.key: {Name:mk8a44ed64694b47d09bdbf0fe8c051b92db4b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:32.999278 1237617 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/proxy-client-ca.key
	I0429 11:34:33.666284 1237617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18756-1231546/.minikube/proxy-client-ca.crt ...
	I0429 11:34:33.666321 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/proxy-client-ca.crt: {Name:mk0b17e32528870a0304f5efb5bd105bfe4ea76f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:33.667942 1237617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18756-1231546/.minikube/proxy-client-ca.key ...
	I0429 11:34:33.667963 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/proxy-client-ca.key: {Name:mk2232e658101caf3170828b2d9085d74040565c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:33.668066 1237617 certs.go:256] generating profile certs ...
	I0429 11:34:33.668137 1237617 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.key
	I0429 11:34:33.668156 1237617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt with IP's: []
	I0429 11:34:34.115413 1237617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt ...
	I0429 11:34:34.115443 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: {Name:mkaf430a8e01e9f887def27f4fea1ff97047a47a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:34.116233 1237617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.key ...
	I0429 11:34:34.116248 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.key: {Name:mkf0c79492aed487569621f8e1d1da25488184a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:34.116974 1237617 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.key.9e4bbe06
	I0429 11:34:34.117026 1237617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.crt.9e4bbe06 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0429 11:34:34.514926 1237617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.crt.9e4bbe06 ...
	I0429 11:34:34.514961 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.crt.9e4bbe06: {Name:mkcc54edb63d4378732201e25d52f4dc767bf62f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:34.515797 1237617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.key.9e4bbe06 ...
	I0429 11:34:34.515820 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.key.9e4bbe06: {Name:mk7f67c28d85bf9bd0e58476f231878c3993570e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:34.515923 1237617 certs.go:381] copying /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.crt.9e4bbe06 -> /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.crt
	I0429 11:34:34.516012 1237617 certs.go:385] copying /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.key.9e4bbe06 -> /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.key
	I0429 11:34:34.516076 1237617 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/proxy-client.key
	I0429 11:34:34.516100 1237617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/proxy-client.crt with IP's: []
	I0429 11:34:34.778633 1237617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/proxy-client.crt ...
	I0429 11:34:34.778665 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/proxy-client.crt: {Name:mka241b7d61c28da857e5d409dc54c02b4d839d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:34.779397 1237617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/proxy-client.key ...
	I0429 11:34:34.779415 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/proxy-client.key: {Name:mk20dedc655d9364df65b3d45460fa539e0ebbf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:34.779643 1237617 certs.go:484] found cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 11:34:34.779685 1237617 certs.go:484] found cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca.pem (1082 bytes)
	I0429 11:34:34.779715 1237617 certs.go:484] found cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/cert.pem (1123 bytes)
	I0429 11:34:34.779742 1237617 certs.go:484] found cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/key.pem (1675 bytes)
	I0429 11:34:34.780352 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 11:34:34.807477 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 11:34:34.832809 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 11:34:34.857752 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 11:34:34.883588 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0429 11:34:34.909306 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 11:34:34.933531 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 11:34:34.957908 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 11:34:34.982334 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 11:34:35.008950 1237617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 11:34:35.029900 1237617 ssh_runner.go:195] Run: openssl version
	I0429 11:34:35.036021 1237617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 11:34:35.046023 1237617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:34:35.049667 1237617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:34 /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:34:35.049780 1237617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:34:35.056537 1237617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 11:34:35.066224 1237617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 11:34:35.069528 1237617 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 11:34:35.069595 1237617 kubeadm.go:391] StartCluster: {Name:addons-760922 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-760922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:34:35.069686 1237617 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 11:34:35.069745 1237617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 11:34:35.108603 1237617 cri.go:89] found id: ""
	I0429 11:34:35.108698 1237617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 11:34:35.118054 1237617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 11:34:35.127520 1237617 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0429 11:34:35.127619 1237617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 11:34:35.136788 1237617 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 11:34:35.136821 1237617 kubeadm.go:156] found existing configuration files:
	
	I0429 11:34:35.136929 1237617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 11:34:35.146033 1237617 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 11:34:35.146103 1237617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 11:34:35.155261 1237617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 11:34:35.165524 1237617 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 11:34:35.165623 1237617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 11:34:35.174371 1237617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 11:34:35.183423 1237617 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 11:34:35.183489 1237617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 11:34:35.192319 1237617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 11:34:35.201475 1237617 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 11:34:35.201537 1237617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
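The lines above show minikube's stale-config check: for each kubeconfig file it runs `sudo grep` for the expected control-plane endpoint and, when the grep exits non-zero (file missing or endpoint absent), removes the file with `sudo rm -f`. A minimal Python sketch of that decision logic, with hypothetical helper names (`is_stale`, `cleanup` are not minikube's actual functions):

```python
from pathlib import Path

ENDPOINT = "https://control-plane.minikube.internal:8443"

def is_stale(conf_path: str, endpoint: str = ENDPOINT) -> bool:
    """True when the kubeconfig is missing or does not reference the
    expected control-plane endpoint (mirrors the non-zero exit of the
    `sudo grep ...` commands in the log)."""
    p = Path(conf_path)
    if not p.is_file():
        return True  # grep/ls exited with status 2: treat as stale
    return endpoint not in p.read_text()

def cleanup(paths, endpoint: str = ENDPOINT):
    """Remove every stale kubeconfig, like the `sudo rm -f` calls above."""
    removed = []
    for conf in paths:
        if is_stale(conf, endpoint):
            Path(conf).unlink(missing_ok=True)  # rm -f: ignore missing files
            removed.append(conf)
    return removed
```

Since the files did not exist at all in this run, every check failed with status 2 and all four paths were cleaned before `kubeadm init`.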
	I0429 11:34:35.209990 1237617 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0429 11:34:35.255974 1237617 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 11:34:35.256036 1237617 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 11:34:35.298864 1237617 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0429 11:34:35.298940 1237617 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1058-aws
	I0429 11:34:35.298982 1237617 kubeadm.go:309] OS: Linux
	I0429 11:34:35.299032 1237617 kubeadm.go:309] CGROUPS_CPU: enabled
	I0429 11:34:35.299101 1237617 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0429 11:34:35.299154 1237617 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0429 11:34:35.299205 1237617 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0429 11:34:35.299256 1237617 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0429 11:34:35.299307 1237617 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0429 11:34:35.299358 1237617 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0429 11:34:35.299409 1237617 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0429 11:34:35.299457 1237617 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0429 11:34:35.373866 1237617 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 11:34:35.373982 1237617 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 11:34:35.374079 1237617 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 11:34:35.617262 1237617 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 11:34:35.621108 1237617 out.go:204]   - Generating certificates and keys ...
	I0429 11:34:35.621292 1237617 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 11:34:35.621394 1237617 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 11:34:35.863755 1237617 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 11:34:36.145894 1237617 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 11:34:36.772519 1237617 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 11:34:37.389059 1237617 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 11:34:37.904059 1237617 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 11:34:37.904515 1237617 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-760922 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0429 11:34:38.437819 1237617 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 11:34:38.438127 1237617 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-760922 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0429 11:34:39.028427 1237617 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 11:34:39.432989 1237617 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 11:34:39.687561 1237617 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 11:34:39.687799 1237617 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 11:34:39.970112 1237617 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 11:34:40.200280 1237617 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 11:34:40.917641 1237617 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 11:34:41.704991 1237617 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 11:34:42.461669 1237617 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 11:34:42.462761 1237617 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 11:34:42.472253 1237617 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 11:34:42.474310 1237617 out.go:204]   - Booting up control plane ...
	I0429 11:34:42.474408 1237617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 11:34:42.474484 1237617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 11:34:42.475127 1237617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 11:34:42.486375 1237617 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 11:34:42.487421 1237617 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 11:34:42.487647 1237617 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 11:34:42.582536 1237617 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 11:34:42.582623 1237617 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 11:34:44.584125 1237617 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 2.001461006s
	I0429 11:34:44.584229 1237617 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 11:34:50.586103 1237617 kubeadm.go:309] [api-check] The API server is healthy after 6.002193018s
	I0429 11:34:50.605271 1237617 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 11:34:50.623113 1237617 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 11:34:50.645190 1237617 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 11:34:50.645420 1237617 kubeadm.go:309] [mark-control-plane] Marking the node addons-760922 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 11:34:50.660126 1237617 kubeadm.go:309] [bootstrap-token] Using token: niags0.uqyndtemqqmk9gvx
	I0429 11:34:50.662507 1237617 out.go:204]   - Configuring RBAC rules ...
	I0429 11:34:50.662649 1237617 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 11:34:50.667330 1237617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 11:34:50.675032 1237617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 11:34:50.678549 1237617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 11:34:50.684076 1237617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 11:34:50.688178 1237617 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 11:34:50.992818 1237617 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 11:34:51.450680 1237617 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 11:34:51.992103 1237617 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 11:34:51.993396 1237617 kubeadm.go:309] 
	I0429 11:34:51.993472 1237617 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 11:34:51.993485 1237617 kubeadm.go:309] 
	I0429 11:34:51.993560 1237617 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 11:34:51.993569 1237617 kubeadm.go:309] 
	I0429 11:34:51.993595 1237617 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 11:34:51.993655 1237617 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 11:34:51.993710 1237617 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 11:34:51.993719 1237617 kubeadm.go:309] 
	I0429 11:34:51.993778 1237617 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 11:34:51.993787 1237617 kubeadm.go:309] 
	I0429 11:34:51.993833 1237617 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 11:34:51.993842 1237617 kubeadm.go:309] 
	I0429 11:34:51.993892 1237617 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 11:34:51.993967 1237617 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 11:34:51.994036 1237617 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 11:34:51.994044 1237617 kubeadm.go:309] 
	I0429 11:34:51.994126 1237617 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 11:34:51.994203 1237617 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 11:34:51.994210 1237617 kubeadm.go:309] 
	I0429 11:34:51.994290 1237617 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token niags0.uqyndtemqqmk9gvx \
	I0429 11:34:51.994392 1237617 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:76846a2c6b2d6c4faa2ca5b730d7f0eab7128ed63e643e5b107de948d1d74ce5 \
	I0429 11:34:51.994415 1237617 kubeadm.go:309] 	--control-plane 
	I0429 11:34:51.994428 1237617 kubeadm.go:309] 
	I0429 11:34:51.994512 1237617 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 11:34:51.994521 1237617 kubeadm.go:309] 
	I0429 11:34:51.994599 1237617 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token niags0.uqyndtemqqmk9gvx \
	I0429 11:34:51.994700 1237617 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:76846a2c6b2d6c4faa2ca5b730d7f0eab7128ed63e643e5b107de948d1d74ce5 
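The `--discovery-token-ca-cert-hash` printed in the join commands above is, per the kubeadm documentation, `sha256:` followed by the hex SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. Extracting the SPKI from a PEM certificate normally requires openssl or the `cryptography` package; this stdlib-only sketch only illustrates the final hashing and formatting step (the function name is an assumption, not kubeadm's API):

```python
import hashlib

def format_ca_cert_hash(spki_der: bytes) -> str:
    """Format a kubeadm discovery hash: 'sha256:' + hex SHA-256 of the
    DER-encoded Subject Public Key Info of the cluster CA certificate.
    (Obtaining spki_der from /etc/kubernetes/pki/ca.crt is assumed to be
    done elsewhere, e.g. via openssl.)"""
    return "sha256:" + hashlib.sha256(spki_der).hexdigest()
```

The kubeadm docs show an equivalent openssl pipeline for RSA CA keys: `openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex`.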
	I0429 11:34:51.997859 1237617 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1058-aws\n", err: exit status 1
	I0429 11:34:51.997979 1237617 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 11:34:51.997999 1237617 cni.go:84] Creating CNI manager for ""
	I0429 11:34:51.998011 1237617 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 11:34:51.999955 1237617 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 11:34:52.002580 1237617 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 11:34:52.008870 1237617 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 11:34:52.008896 1237617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 11:34:52.029513 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 11:34:52.333444 1237617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 11:34:52.333512 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:52.333664 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-760922 minikube.k8s.io/updated_at=2024_04_29T11_34_52_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d minikube.k8s.io/name=addons-760922 minikube.k8s.io/primary=true
	I0429 11:34:52.513796 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:52.513854 1237617 ops.go:34] apiserver oom_adj: -16
	I0429 11:34:53.014079 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:53.514383 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:54.014482 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:54.514459 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:55.014031 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:55.514460 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:56.014460 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:56.514713 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:57.013947 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:57.514528 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:58.014864 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:58.514796 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:59.014612 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:59.514309 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:00.018196 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:00.514807 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:01.013956 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:01.514391 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:02.014498 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:02.514228 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:03.013974 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:03.513915 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:04.014166 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:04.514195 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:05.014664 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:05.513940 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
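The run of near-identical lines above is minikube re-issuing `kubectl get sa default` roughly every 500 ms until the default service account exists, i.e. a fixed-interval poll with a deadline. A minimal sketch of that pattern (the helper name is hypothetical, not minikube's code):

```python
import time

def poll_until(check, interval: float = 0.5, timeout: float = 60.0):
    """Re-run `check` every `interval` seconds until it returns a truthy
    value; raise TimeoutError once `timeout` seconds have elapsed.
    Mirrors the repeated `kubectl get sa default` calls in the log."""
    deadline = time.monotonic() + timeout
    while True:
        result = check()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within timeout")
        time.sleep(interval)
```

In this run the poll succeeded after about 13.3 s, matching the "took 13.306043821s to wait for elevateKubeSystemPrivileges" duration metric below.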
	I0429 11:35:05.639492 1237617 kubeadm.go:1107] duration metric: took 13.306043821s to wait for elevateKubeSystemPrivileges
	W0429 11:35:05.639524 1237617 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 11:35:05.639531 1237617 kubeadm.go:393] duration metric: took 30.569958108s to StartCluster
	I0429 11:35:05.639545 1237617 settings.go:142] acquiring lock: {Name:mk0ef22430695db96615335cd2f3ba564b8d0f0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:35:05.640148 1237617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18756-1231546/kubeconfig
	I0429 11:35:05.640527 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/kubeconfig: {Name:mk3a783043373f26fbcf8c9fca1b15742ae22d84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:35:05.641143 1237617 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 11:35:05.643291 1237617 out.go:177] * Verifying Kubernetes components...
	I0429 11:35:05.641232 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 11:35:05.641414 1237617 config.go:182] Loaded profile config "addons-760922": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 11:35:05.641422 1237617 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0429 11:35:05.645473 1237617 addons.go:69] Setting yakd=true in profile "addons-760922"
	I0429 11:35:05.645499 1237617 addons.go:234] Setting addon yakd=true in "addons-760922"
	I0429 11:35:05.645528 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.646021 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.646125 1237617 addons.go:69] Setting ingress=true in profile "addons-760922"
	I0429 11:35:05.646153 1237617 addons.go:234] Setting addon ingress=true in "addons-760922"
	I0429 11:35:05.646192 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.646549 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.646990 1237617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:35:05.647146 1237617 addons.go:69] Setting cloud-spanner=true in profile "addons-760922"
	I0429 11:35:05.647164 1237617 addons.go:234] Setting addon cloud-spanner=true in "addons-760922"
	I0429 11:35:05.647184 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.647532 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.647836 1237617 addons.go:69] Setting ingress-dns=true in profile "addons-760922"
	I0429 11:35:05.647859 1237617 addons.go:234] Setting addon ingress-dns=true in "addons-760922"
	I0429 11:35:05.647895 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.648269 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.650422 1237617 addons.go:69] Setting inspektor-gadget=true in profile "addons-760922"
	I0429 11:35:05.650453 1237617 addons.go:234] Setting addon inspektor-gadget=true in "addons-760922"
	I0429 11:35:05.650477 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.650853 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.664756 1237617 addons.go:69] Setting metrics-server=true in profile "addons-760922"
	I0429 11:35:05.664808 1237617 addons.go:234] Setting addon metrics-server=true in "addons-760922"
	I0429 11:35:05.664846 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.665296 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.665569 1237617 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-760922"
	I0429 11:35:05.665738 1237617 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-760922"
	I0429 11:35:05.665828 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.675245 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.665896 1237617 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-760922"
	I0429 11:35:05.705384 1237617 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-760922"
	I0429 11:35:05.705452 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.705917 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.665902 1237617 addons.go:69] Setting registry=true in profile "addons-760922"
	I0429 11:35:05.665906 1237617 addons.go:69] Setting storage-provisioner=true in profile "addons-760922"
	I0429 11:35:05.665910 1237617 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-760922"
	I0429 11:35:05.665916 1237617 addons.go:69] Setting volumesnapshots=true in profile "addons-760922"
	I0429 11:35:05.671332 1237617 addons.go:69] Setting default-storageclass=true in profile "addons-760922"
	I0429 11:35:05.671348 1237617 addons.go:69] Setting gcp-auth=true in profile "addons-760922"
	I0429 11:35:05.727349 1237617 mustload.go:65] Loading cluster: addons-760922
	I0429 11:35:05.727524 1237617 config.go:182] Loaded profile config "addons-760922": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 11:35:05.727767 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.742410 1237617 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0429 11:35:05.745795 1237617 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0429 11:35:05.747872 1237617 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0429 11:35:05.753750 1237617 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 11:35:05.753771 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0429 11:35:05.753837 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:05.761142 1237617 addons.go:234] Setting addon registry=true in "addons-760922"
	I0429 11:35:05.761194 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.761628 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.786162 1237617 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0429 11:35:05.788860 1237617 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 11:35:05.788924 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0429 11:35:05.789037 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:05.777403 1237617 addons.go:234] Setting addon storage-provisioner=true in "addons-760922"
	I0429 11:35:05.796921 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.797388 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.777424 1237617 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-760922"
	I0429 11:35:05.810166 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.823435 1237617 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0429 11:35:05.777447 1237617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-760922"
	I0429 11:35:05.777436 1237617 addons.go:234] Setting addon volumesnapshots=true in "addons-760922"
	I0429 11:35:05.827897 1237617 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0429 11:35:05.827917 1237617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0429 11:35:05.828219 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.832896 1237617 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0429 11:35:05.834916 1237617 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0429 11:35:05.834936 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0429 11:35:05.835002 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:05.833081 1237617 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0429 11:35:05.833136 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.833149 1237617 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0429 11:35:05.833161 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0429 11:35:05.852770 1237617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0429 11:35:05.858206 1237617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0429 11:35:05.870428 1237617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0429 11:35:05.868613 1237617 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0429 11:35:05.869092 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:05.869398 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.870174 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.889666 1237617 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 11:35:05.889727 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 11:35:05.889791 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:05.901269 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:05.901632 1237617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0429 11:35:05.904782 1237617 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0429 11:35:05.910354 1237617 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0429 11:35:05.919070 1237617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0429 11:35:05.906896 1237617 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 11:35:05.889695 1237617 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0429 11:35:05.928569 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0429 11:35:05.928751 1237617 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0429 11:35:05.928768 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0429 11:35:05.928800 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:05.928944 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:05.938835 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0429 11:35:05.938931 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:05.962134 1237617 out.go:177]   - Using image docker.io/registry:2.8.3
	I0429 11:35:05.966136 1237617 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0429 11:35:05.964892 1237617 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-760922"
	I0429 11:35:05.967956 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.968469 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.991056 1237617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 11:35:05.989142 1237617 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0429 11:35:05.993242 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:05.994775 1237617 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 11:35:05.994826 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 11:35:05.994914 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:06.008952 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0429 11:35:06.009155 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:06.027861 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.032992 1237617 addons.go:234] Setting addon default-storageclass=true in "addons-760922"
	I0429 11:35:06.033033 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:06.033749 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:06.063032 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.090261 1237617 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0429 11:35:06.092021 1237617 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0429 11:35:06.092051 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0429 11:35:06.092120 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:06.182815 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.182900 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 11:35:06.183267 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.201075 1237617 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0429 11:35:06.197804 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 11:35:06.197861 1237617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 11:35:06.209731 1237617 out.go:177]   - Using image docker.io/busybox:stable
	I0429 11:35:06.213599 1237617 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 11:35:06.213666 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0429 11:35:06.213753 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:06.219440 1237617 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 11:35:06.219503 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 11:35:06.219582 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:06.209494 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.209664 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.212311 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.225290 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.231301 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.263628 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.264376 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.337457 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 11:35:06.424087 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0429 11:35:06.539124 1237617 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0429 11:35:06.539190 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0429 11:35:06.699280 1237617 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0429 11:35:06.699351 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0429 11:35:06.707349 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 11:35:06.761601 1237617 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0429 11:35:06.761673 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0429 11:35:06.777229 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 11:35:06.784634 1237617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 11:35:06.784778 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0429 11:35:06.803750 1237617 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0429 11:35:06.803826 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0429 11:35:06.826840 1237617 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0429 11:35:06.826911 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0429 11:35:06.829376 1237617 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0429 11:35:06.829444 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0429 11:35:06.831984 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 11:35:06.849598 1237617 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0429 11:35:06.849672 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0429 11:35:06.857731 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 11:35:06.899127 1237617 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0429 11:35:06.899199 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0429 11:35:06.936297 1237617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 11:35:06.936371 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 11:35:06.974947 1237617 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0429 11:35:06.975022 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0429 11:35:06.997649 1237617 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0429 11:35:06.997722 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0429 11:35:07.016732 1237617 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0429 11:35:07.016807 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0429 11:35:07.036494 1237617 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0429 11:35:07.036568 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0429 11:35:07.077873 1237617 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0429 11:35:07.077945 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0429 11:35:07.126908 1237617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 11:35:07.126980 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 11:35:07.155867 1237617 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0429 11:35:07.155942 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0429 11:35:07.161066 1237617 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0429 11:35:07.161147 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0429 11:35:07.221387 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0429 11:35:07.227338 1237617 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0429 11:35:07.227410 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0429 11:35:07.239033 1237617 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0429 11:35:07.239104 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0429 11:35:07.277033 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 11:35:07.279138 1237617 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0429 11:35:07.279225 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0429 11:35:07.302613 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0429 11:35:07.336747 1237617 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0429 11:35:07.336820 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0429 11:35:07.359613 1237617 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0429 11:35:07.359688 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0429 11:35:07.381889 1237617 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0429 11:35:07.381961 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0429 11:35:07.482025 1237617 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 11:35:07.482095 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0429 11:35:07.486475 1237617 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 11:35:07.486542 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0429 11:35:07.594806 1237617 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0429 11:35:07.594876 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0429 11:35:07.636424 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 11:35:07.703123 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 11:35:07.759902 1237617 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0429 11:35:07.759926 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0429 11:35:07.921934 1237617 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0429 11:35:07.921960 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0429 11:35:08.059628 1237617 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0429 11:35:08.059690 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0429 11:35:08.199643 1237617 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 11:35:08.199715 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0429 11:35:08.307541 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 11:35:11.469331 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.286223453s)
	I0429 11:35:11.469872 1237617 addons.go:470] Verifying addon ingress=true in "addons-760922"
	I0429 11:35:11.472151 1237617 out.go:177] * Verifying ingress addon...
	I0429 11:35:11.469433 1237617 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.262883746s)
	I0429 11:35:11.469549 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.132073692s)
	I0429 11:35:11.469591 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.045433569s)
	I0429 11:35:11.469638 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.762214126s)
	I0429 11:35:11.469657 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.692352673s)
	I0429 11:35:11.469691 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.637653235s)
	I0429 11:35:11.469706 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.611909685s)
	I0429 11:35:11.469734 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.248274658s)
	I0429 11:35:11.469778 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.192683179s)
	I0429 11:35:11.469826 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.167136092s)
	I0429 11:35:11.469504 1237617 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.263187301s)
	I0429 11:35:11.474651 1237617 node_ready.go:35] waiting up to 6m0s for node "addons-760922" to be "Ready" ...
	I0429 11:35:11.475487 1237617 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0429 11:35:11.475641 1237617 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0429 11:35:11.476525 1237617 addons.go:470] Verifying addon registry=true in "addons-760922"
	I0429 11:35:11.478902 1237617 out.go:177] * Verifying registry addon...
	I0429 11:35:11.476688 1237617 addons.go:470] Verifying addon metrics-server=true in "addons-760922"
	I0429 11:35:11.483966 1237617 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-760922 service yakd-dashboard -n yakd-dashboard
	
	I0429 11:35:11.481615 1237617 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0429 11:35:11.496239 1237617 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0429 11:35:11.496265 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:11.505495 1237617 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0429 11:35:11.505523 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0429 11:35:11.522950 1237617 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0429 11:35:11.664622 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.028025343s)
	W0429 11:35:11.664718 1237617 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0429 11:35:11.664775 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.961417844s)
	I0429 11:35:11.664753 1237617 retry.go:31] will retry after 153.199739ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0429 11:35:11.818980 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 11:35:11.919552 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.611877449s)
	I0429 11:35:11.919640 1237617 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-760922"
	I0429 11:35:11.923843 1237617 out.go:177] * Verifying csi-hostpath-driver addon...
	I0429 11:35:11.926303 1237617 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0429 11:35:12.010536 1237617 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0429 11:35:12.010567 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:12.025496 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:12.051337 1237617 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-760922" context rescaled to 1 replicas
	I0429 11:35:12.070932 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:12.435757 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:12.482069 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:12.491729 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:12.931395 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:12.982713 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:12.990178 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:13.430902 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:13.479968 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:13.483415 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:13.490566 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:13.931068 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:13.981603 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:13.994992 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:14.192598 1237617 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0429 11:35:14.192715 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:14.215217 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:14.336529 1237617 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0429 11:35:14.358990 1237617 addons.go:234] Setting addon gcp-auth=true in "addons-760922"
	I0429 11:35:14.359044 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:14.359499 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:14.390174 1237617 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0429 11:35:14.390226 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:14.409026 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:14.437549 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:14.505152 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:14.508764 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:14.931060 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:14.982378 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:14.989444 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:14.991091 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.171997425s)
	I0429 11:35:14.993594 1237617 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0429 11:35:14.995787 1237617 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0429 11:35:14.998206 1237617 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0429 11:35:14.998231 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0429 11:35:15.032287 1237617 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0429 11:35:15.032320 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0429 11:35:15.064481 1237617 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 11:35:15.064515 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0429 11:35:15.090795 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 11:35:15.431860 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:15.486778 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:15.500808 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:15.800111 1237617 addons.go:470] Verifying addon gcp-auth=true in "addons-760922"
	I0429 11:35:15.805931 1237617 out.go:177] * Verifying gcp-auth addon...
	I0429 11:35:15.808567 1237617 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0429 11:35:15.825697 1237617 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0429 11:35:15.825723 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:15.931520 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:15.981408 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:15.982948 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:16.011320 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:16.311975 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:16.431566 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:16.482627 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:16.491362 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:16.815462 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:16.931327 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:16.985205 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:16.990569 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:17.313082 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:17.431669 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:17.481757 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:17.489472 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:17.812523 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:17.931015 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:17.979775 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:17.989589 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:18.312507 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:18.431016 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:18.478063 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:18.480047 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:18.489740 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:18.812043 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:18.932246 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:18.981504 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:18.990526 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:19.312118 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:19.431406 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:19.479732 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:19.489258 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:19.815555 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:19.931117 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:19.981553 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:19.990363 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:20.312465 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:20.430618 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:20.479688 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:20.480647 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:20.489319 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:20.811751 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:20.930954 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:20.980968 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:20.990003 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:21.312496 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:21.430628 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:21.479745 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:21.489937 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:21.812123 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:21.930691 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:21.979790 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:21.989535 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:22.312617 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:22.431042 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:22.480813 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:22.489318 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:22.812795 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:22.931029 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:22.978998 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:22.979504 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:22.990102 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:23.312831 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:23.430660 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:23.479678 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:23.489487 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:23.813087 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:23.930832 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:23.979826 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:23.989516 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:24.311894 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:24.430587 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:24.479406 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:24.490052 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:24.811976 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:24.933639 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:24.980261 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:24.990393 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:25.312578 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:25.430628 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:25.477558 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:25.479949 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:25.490387 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:25.811623 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:25.931004 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:25.981637 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:25.989432 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:26.312604 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:26.430452 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:26.480346 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:26.490008 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:26.812656 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:26.930412 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:26.980511 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:26.989243 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:27.312280 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:27.431088 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:27.477776 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:27.479638 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:27.490227 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:27.812198 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:27.930150 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:27.979658 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:27.990482 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:28.312515 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:28.431264 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:28.480594 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:28.489352 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:28.812460 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:28.931131 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:28.979413 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:28.990100 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:29.312627 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:29.430204 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:29.478269 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:29.479357 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:29.489667 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:29.812284 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:29.931132 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:29.979615 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:29.989890 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:30.311851 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:30.430524 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:30.480773 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:30.489528 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:30.812465 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:30.931130 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:30.980166 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:30.990276 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:31.312299 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:31.430305 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:31.479966 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:31.480352 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:31.489867 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:31.811800 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:31.930510 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:31.980282 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:31.989986 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:32.312596 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:32.430821 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:32.480165 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:32.489819 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:32.812393 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:32.930531 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:32.980241 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:32.990089 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:33.312489 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:33.431454 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:33.479867 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:33.489526 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:33.811899 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:33.931380 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:33.978864 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:33.979622 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:33.989389 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:34.312618 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:34.430585 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:34.480330 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:34.490225 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:34.812078 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:34.931079 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:34.980151 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:34.999206 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:35.312350 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:35.430510 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:35.479777 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:35.489687 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:35.811599 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:35.930929 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:35.980014 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:35.990123 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:36.312120 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:36.431283 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:36.479887 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:36.480278 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:36.489762 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:36.811607 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:36.931023 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:36.979948 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:36.990246 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:37.312583 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:37.430366 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:37.479980 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:37.489672 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:37.812114 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:37.931690 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:37.981371 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:37.990491 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:38.311800 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:38.434109 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:38.480138 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:38.482466 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:38.489293 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:38.815345 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:38.998551 1237617 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0429 11:35:38.998623 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:39.004218 1237617 node_ready.go:49] node "addons-760922" has status "Ready":"True"
	I0429 11:35:39.004302 1237617 node_ready.go:38] duration metric: took 27.52961361s for node "addons-760922" to be "Ready" ...
	I0429 11:35:39.004339 1237617 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 11:35:39.027936 1237617 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0429 11:35:39.028010 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:39.032489 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:39.078274 1237617 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hsk8z" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:39.314809 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:39.436472 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:39.482310 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:39.510294 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:39.811568 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:39.931842 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:39.980402 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:39.990886 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:40.313663 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:40.433992 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:40.509293 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:40.510223 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:40.830470 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:40.932487 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:40.980553 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:40.992834 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:41.084986 1237617 pod_ready.go:92] pod "coredns-7db6d8ff4d-hsk8z" in "kube-system" namespace has status "Ready":"True"
	I0429 11:35:41.085016 1237617 pod_ready.go:81] duration metric: took 2.006666316s for pod "coredns-7db6d8ff4d-hsk8z" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.085041 1237617 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-760922" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.090770 1237617 pod_ready.go:92] pod "etcd-addons-760922" in "kube-system" namespace has status "Ready":"True"
	I0429 11:35:41.090795 1237617 pod_ready.go:81] duration metric: took 5.746347ms for pod "etcd-addons-760922" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.090810 1237617 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-760922" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.096191 1237617 pod_ready.go:92] pod "kube-apiserver-addons-760922" in "kube-system" namespace has status "Ready":"True"
	I0429 11:35:41.096217 1237617 pod_ready.go:81] duration metric: took 5.399526ms for pod "kube-apiserver-addons-760922" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.096230 1237617 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-760922" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.108991 1237617 pod_ready.go:92] pod "kube-controller-manager-addons-760922" in "kube-system" namespace has status "Ready":"True"
	I0429 11:35:41.109036 1237617 pod_ready.go:81] duration metric: took 12.779373ms for pod "kube-controller-manager-addons-760922" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.109050 1237617 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w598j" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.118598 1237617 pod_ready.go:92] pod "kube-proxy-w598j" in "kube-system" namespace has status "Ready":"True"
	I0429 11:35:41.118626 1237617 pod_ready.go:81] duration metric: took 9.567232ms for pod "kube-proxy-w598j" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.118639 1237617 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-760922" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.312080 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:41.434984 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:41.485004 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:41.487686 1237617 pod_ready.go:92] pod "kube-scheduler-addons-760922" in "kube-system" namespace has status "Ready":"True"
	I0429 11:35:41.487712 1237617 pod_ready.go:81] duration metric: took 369.065003ms for pod "kube-scheduler-addons-760922" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.487724 1237617 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.493841 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:41.813004 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:41.931789 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:41.979655 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:41.991891 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:42.312757 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:42.433630 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:42.480609 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:42.495667 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:42.813046 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:42.938434 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:42.981076 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:42.992285 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:43.312550 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:43.432908 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:43.480129 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:43.490518 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:43.494649 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:35:43.812452 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:43.932825 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:43.980135 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:43.990831 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:44.312731 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:44.432306 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:44.482407 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:44.491427 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:44.813064 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:44.933612 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:44.982171 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:44.991015 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:45.314490 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:45.432477 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:45.480847 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:45.493238 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:45.496279 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:35:45.812286 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:45.933580 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:45.982280 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:45.992133 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:46.312592 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:46.433202 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:46.481132 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:46.491984 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:46.817523 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:46.932927 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:46.980085 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:46.992352 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:47.313144 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:47.433936 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:47.480728 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:47.492040 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:47.501694 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:35:47.818132 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:47.939084 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:47.981847 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:48.009837 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:48.314894 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:48.433353 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:48.481326 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:48.492588 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:48.816192 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:48.935581 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:48.984036 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:49.008137 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:49.313250 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:49.434155 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:49.481467 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:49.528177 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:49.531458 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:35:49.826944 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:49.961261 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:49.981180 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:50.002836 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:50.314743 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:50.440501 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:50.483103 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:50.501197 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:50.813194 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:50.937151 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:50.981913 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:50.991474 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:51.314082 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:51.434689 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:51.482786 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:51.493467 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:51.813447 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:51.931933 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:51.980048 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:51.991913 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:51.994475 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:35:52.312560 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:52.432702 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:52.479906 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:52.491501 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:52.818552 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:52.932582 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:52.979864 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:52.989940 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:53.314700 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:53.434194 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:53.484678 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:53.495492 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:53.813213 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:53.933758 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:53.981504 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:53.992624 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:54.024113 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:35:54.313023 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:54.433173 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:54.480893 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:54.490892 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:54.813761 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:54.934168 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:54.984396 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:54.990982 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:55.312598 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:55.433733 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:55.480540 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:55.493548 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:55.812557 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:55.934926 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:56.006118 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:56.034630 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:56.036937 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:35:56.312380 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:56.432518 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:56.480471 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:56.491429 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:56.812970 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:56.932703 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:57.007914 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:57.019533 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:57.316921 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:57.435047 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:57.481075 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:57.520881 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:57.813020 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:57.932401 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:57.981565 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:57.998066 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:58.312117 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:58.432375 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:58.479663 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:58.491683 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:58.495015 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:35:58.812849 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:58.937297 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:58.980402 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:58.991384 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:59.313237 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:59.435080 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:59.480751 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:59.492323 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:59.812589 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:59.932435 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:59.981395 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:59.995724 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:00.329110 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:00.441925 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:00.486657 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:00.493293 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:00.497726 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:00.812510 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:00.933315 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:00.980850 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:00.991696 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:01.312741 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:01.433297 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:01.481232 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:01.495230 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:01.813036 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:01.934908 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:01.980414 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:01.992322 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:02.312272 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:02.431835 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:02.479780 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:02.493496 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:02.812692 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:02.932585 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:02.980467 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:02.990764 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:02.995298 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:03.312178 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:03.431540 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:03.480499 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:03.496700 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:03.812853 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:03.933596 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:03.980303 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:03.993258 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:04.313223 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:04.434017 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:04.481297 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:04.493022 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:04.813488 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:04.937546 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:04.980930 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:04.991454 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:05.013957 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:05.313148 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:05.435078 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:05.486226 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:05.501530 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:05.812891 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:05.933081 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:05.980508 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:05.994408 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:06.312582 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:06.433411 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:06.480817 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:06.492194 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:06.812590 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:06.932695 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:06.980626 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:06.990359 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:07.312339 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:07.432991 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:07.481059 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:07.491021 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:07.496724 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:07.812355 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:07.931579 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:07.980274 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:07.990698 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:08.312152 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:08.432081 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:08.480019 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:08.492521 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:08.812637 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:08.958437 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:08.985493 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:09.014069 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:09.313186 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:09.433924 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:09.482095 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:09.491047 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:09.498890 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:09.812971 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:09.932164 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:09.980880 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:10.016032 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:10.313582 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:10.432275 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:10.480299 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:10.492761 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:10.812475 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:10.932660 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:10.995478 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:10.999156 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:11.315882 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:11.432114 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:11.480489 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:11.491863 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:11.811989 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:11.933279 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:11.981019 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:11.991643 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:12.001463 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:12.312334 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:12.432749 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:12.480728 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:12.496986 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:12.812590 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:12.934119 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:12.989969 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:13.023998 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:13.313914 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:13.432908 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:13.482913 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:13.503799 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:13.812651 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:13.932963 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:13.983802 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:13.996553 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:14.022498 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:14.312912 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:14.432767 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:14.480557 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:14.491380 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:14.811803 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:14.933972 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:14.981003 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:14.992408 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:15.314272 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:15.432111 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:15.480285 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:15.490981 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:15.811992 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:15.933212 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:15.980110 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:15.992103 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:16.316376 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:16.431620 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:16.479839 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:16.490336 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:16.495616 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:16.813088 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:16.942794 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:16.980967 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:16.999537 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:17.314752 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:17.433076 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:17.480732 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:17.491517 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:17.812593 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:17.932159 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:17.980458 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:17.992452 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:18.312399 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:18.432135 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:18.480252 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:18.491224 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:18.499395 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:18.817680 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:18.933555 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:18.979686 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:18.990349 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:19.312854 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:19.433015 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:19.481699 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:19.501315 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:19.813506 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:19.935990 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:19.980825 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:19.995890 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:20.312726 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:20.442476 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:20.479750 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:20.491710 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:20.812343 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:20.933918 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:20.980373 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:20.992411 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:21.000242 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:21.312792 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:21.433429 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:21.480796 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:21.493357 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:21.813204 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:21.937503 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:21.982737 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:22.017612 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:22.312908 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:22.435063 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:22.480071 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:22.492205 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:22.814006 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:22.932715 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:22.984445 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:23.006121 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:23.006586 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:23.313265 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:23.432352 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:23.480384 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:23.504044 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:23.812809 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:23.933876 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:23.980529 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:23.994516 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:24.321879 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:24.434929 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:24.482511 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:24.493724 1237617 kapi.go:107] duration metric: took 1m13.012100068s to wait for kubernetes.io/minikube-addons=registry ...
	I0429 11:36:24.812358 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:24.933322 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:24.980393 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:25.315581 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:25.440244 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:25.481553 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:25.497571 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:25.813056 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:25.934239 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:25.980814 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:26.312412 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:26.433201 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:26.480945 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:26.813187 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:26.933703 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:26.981699 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:27.315958 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:27.437309 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:27.481843 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:27.812380 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:27.933040 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:27.981266 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:28.022251 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:28.312925 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:28.433042 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:28.488579 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:28.812272 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:28.932536 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:28.980204 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:29.343789 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:29.432684 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:29.480949 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:29.812632 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:29.932764 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:29.980327 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:30.313595 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:30.433028 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:30.485401 1237617 kapi.go:107] duration metric: took 1m19.009910497s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0429 11:36:30.499117 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:30.814545 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:30.932812 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:31.315983 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:31.433915 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:31.812411 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:31.932400 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:32.314514 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:32.434507 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:32.816925 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:32.935959 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:32.997219 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:33.317120 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:33.432572 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:33.812626 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:33.932934 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:34.314617 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:34.436072 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:34.811936 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:34.932560 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:35.312639 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:35.434326 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:35.495516 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:35.812640 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:35.933202 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:36.313024 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:36.435262 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:36.812102 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:36.933725 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:37.312349 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:37.433336 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:37.812358 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:37.931557 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:37.994307 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:38.312568 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:38.436881 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:38.813498 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:38.932524 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:39.312860 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:39.432098 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:39.811682 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:39.931718 1237617 kapi.go:107] duration metric: took 1m28.005415179s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0429 11:36:39.995750 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:40.313018 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:40.812719 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:41.312661 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:41.812505 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:42.313031 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:42.494481 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:42.812926 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:43.312756 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:43.813008 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:44.311979 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:44.813800 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:44.993725 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:45.313353 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:45.812496 1237617 kapi.go:107] duration metric: took 1m30.003934514s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0429 11:36:45.814419 1237617 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-760922 cluster.
	I0429 11:36:45.816192 1237617 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0429 11:36:45.818001 1237617 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0429 11:36:45.820140 1237617 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, storage-provisioner, nvidia-device-plugin, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0429 11:36:45.822069 1237617 addons.go:505] duration metric: took 1m40.180633059s for enable addons: enabled=[ingress-dns cloud-spanner storage-provisioner nvidia-device-plugin metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0429 11:36:46.994012 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:49.493917 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:51.494087 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:53.996275 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:56.494729 1237617 pod_ready.go:92] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"True"
	I0429 11:36:56.494757 1237617 pod_ready.go:81] duration metric: took 1m15.007025308s for pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace to be "Ready" ...
	I0429 11:36:56.494771 1237617 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-7lk7c" in "kube-system" namespace to be "Ready" ...
	I0429 11:36:56.503538 1237617 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-7lk7c" in "kube-system" namespace has status "Ready":"True"
	I0429 11:36:56.503565 1237617 pod_ready.go:81] duration metric: took 8.786453ms for pod "nvidia-device-plugin-daemonset-7lk7c" in "kube-system" namespace to be "Ready" ...
	I0429 11:36:56.503587 1237617 pod_ready.go:38] duration metric: took 1m17.499204851s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 11:36:56.503637 1237617 api_server.go:52] waiting for apiserver process to appear ...
	I0429 11:36:56.503684 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 11:36:56.503748 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 11:36:56.557190 1237617 cri.go:89] found id: "a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123"
	I0429 11:36:56.557221 1237617 cri.go:89] found id: ""
	I0429 11:36:56.557230 1237617 logs.go:276] 1 containers: [a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123]
	I0429 11:36:56.557292 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:36:56.561667 1237617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 11:36:56.561735 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 11:36:56.604310 1237617 cri.go:89] found id: "18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092"
	I0429 11:36:56.604344 1237617 cri.go:89] found id: ""
	I0429 11:36:56.604353 1237617 logs.go:276] 1 containers: [18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092]
	I0429 11:36:56.604406 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:36:56.608066 1237617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 11:36:56.608150 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 11:36:56.650509 1237617 cri.go:89] found id: "f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853"
	I0429 11:36:56.650532 1237617 cri.go:89] found id: ""
	I0429 11:36:56.650541 1237617 logs.go:276] 1 containers: [f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853]
	I0429 11:36:56.650597 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:36:56.654183 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 11:36:56.654275 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 11:36:56.700503 1237617 cri.go:89] found id: "0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998"
	I0429 11:36:56.700528 1237617 cri.go:89] found id: ""
	I0429 11:36:56.700542 1237617 logs.go:276] 1 containers: [0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998]
	I0429 11:36:56.700600 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:36:56.704099 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 11:36:56.704167 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 11:36:56.745555 1237617 cri.go:89] found id: "f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648"
	I0429 11:36:56.745575 1237617 cri.go:89] found id: ""
	I0429 11:36:56.745583 1237617 logs.go:276] 1 containers: [f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648]
	I0429 11:36:56.745639 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:36:56.749072 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 11:36:56.749155 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 11:36:56.803115 1237617 cri.go:89] found id: "836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4"
	I0429 11:36:56.803141 1237617 cri.go:89] found id: ""
	I0429 11:36:56.803150 1237617 logs.go:276] 1 containers: [836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4]
	I0429 11:36:56.803214 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:36:56.807011 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 11:36:56.807080 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 11:36:56.848511 1237617 cri.go:89] found id: "d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172"
	I0429 11:36:56.848534 1237617 cri.go:89] found id: ""
	I0429 11:36:56.848542 1237617 logs.go:276] 1 containers: [d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172]
	I0429 11:36:56.848598 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:36:56.852131 1237617 logs.go:123] Gathering logs for dmesg ...
	I0429 11:36:56.852157 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 11:36:56.871317 1237617 logs.go:123] Gathering logs for kube-apiserver [a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123] ...
	I0429 11:36:56.871344 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123"
	I0429 11:36:56.945284 1237617 logs.go:123] Gathering logs for etcd [18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092] ...
	I0429 11:36:56.945317 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092"
	I0429 11:36:56.991448 1237617 logs.go:123] Gathering logs for kube-scheduler [0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998] ...
	I0429 11:36:56.991479 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998"
	I0429 11:36:57.040115 1237617 logs.go:123] Gathering logs for kube-controller-manager [836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4] ...
	I0429 11:36:57.040145 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4"
	I0429 11:36:57.109251 1237617 logs.go:123] Gathering logs for kubelet ...
	I0429 11:36:57.109286 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 11:36:57.162562 1237617 logs.go:138] Found kubelet problem: Apr 29 11:35:38 addons-760922 kubelet[1488]: W0429 11:35:38.678625    1488 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	W0429 11:36:57.162815 1237617 logs.go:138] Found kubelet problem: Apr 29 11:35:38 addons-760922 kubelet[1488]: E0429 11:35:38.678665    1488 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	I0429 11:36:57.208029 1237617 logs.go:123] Gathering logs for describe nodes ...
	I0429 11:36:57.208065 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 11:36:57.391682 1237617 logs.go:123] Gathering logs for coredns [f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853] ...
	I0429 11:36:57.391711 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853"
	I0429 11:36:57.435484 1237617 logs.go:123] Gathering logs for kube-proxy [f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648] ...
	I0429 11:36:57.435515 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648"
	I0429 11:36:57.478379 1237617 logs.go:123] Gathering logs for kindnet [d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172] ...
	I0429 11:36:57.478409 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172"
	I0429 11:36:57.517202 1237617 logs.go:123] Gathering logs for CRI-O ...
	I0429 11:36:57.517229 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 11:36:57.612778 1237617 logs.go:123] Gathering logs for container status ...
	I0429 11:36:57.612814 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 11:36:57.674139 1237617 out.go:304] Setting ErrFile to fd 2...
	I0429 11:36:57.674168 1237617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 11:36:57.674242 1237617 out.go:239] X Problems detected in kubelet:
	W0429 11:36:57.674256 1237617 out.go:239]   Apr 29 11:35:38 addons-760922 kubelet[1488]: W0429 11:35:38.678625    1488 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	W0429 11:36:57.674265 1237617 out.go:239]   Apr 29 11:35:38 addons-760922 kubelet[1488]: E0429 11:35:38.678665    1488 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	I0429 11:36:57.674425 1237617 out.go:304] Setting ErrFile to fd 2...
	I0429 11:36:57.674434 1237617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:37:07.676508 1237617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 11:37:07.690220 1237617 api_server.go:72] duration metric: took 2m2.049038627s to wait for apiserver process to appear ...
	I0429 11:37:07.690245 1237617 api_server.go:88] waiting for apiserver healthz status ...
	I0429 11:37:07.690279 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 11:37:07.690351 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 11:37:07.730930 1237617 cri.go:89] found id: "a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123"
	I0429 11:37:07.730956 1237617 cri.go:89] found id: ""
	I0429 11:37:07.730964 1237617 logs.go:276] 1 containers: [a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123]
	I0429 11:37:07.731023 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:07.734528 1237617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 11:37:07.734613 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 11:37:07.776755 1237617 cri.go:89] found id: "18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092"
	I0429 11:37:07.776779 1237617 cri.go:89] found id: ""
	I0429 11:37:07.776788 1237617 logs.go:276] 1 containers: [18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092]
	I0429 11:37:07.776846 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:07.780552 1237617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 11:37:07.780624 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 11:37:07.817292 1237617 cri.go:89] found id: "f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853"
	I0429 11:37:07.817313 1237617 cri.go:89] found id: ""
	I0429 11:37:07.817320 1237617 logs.go:276] 1 containers: [f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853]
	I0429 11:37:07.817395 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:07.820907 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 11:37:07.820974 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 11:37:07.868213 1237617 cri.go:89] found id: "0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998"
	I0429 11:37:07.868234 1237617 cri.go:89] found id: ""
	I0429 11:37:07.868242 1237617 logs.go:276] 1 containers: [0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998]
	I0429 11:37:07.868328 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:07.871813 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 11:37:07.871884 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 11:37:07.913835 1237617 cri.go:89] found id: "f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648"
	I0429 11:37:07.913862 1237617 cri.go:89] found id: ""
	I0429 11:37:07.913871 1237617 logs.go:276] 1 containers: [f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648]
	I0429 11:37:07.913953 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:07.917724 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 11:37:07.917796 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 11:37:07.961900 1237617 cri.go:89] found id: "836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4"
	I0429 11:37:07.961926 1237617 cri.go:89] found id: ""
	I0429 11:37:07.961935 1237617 logs.go:276] 1 containers: [836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4]
	I0429 11:37:07.962004 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:07.965524 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 11:37:07.965595 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 11:37:08.011070 1237617 cri.go:89] found id: "d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172"
	I0429 11:37:08.011095 1237617 cri.go:89] found id: ""
	I0429 11:37:08.011104 1237617 logs.go:276] 1 containers: [d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172]
	I0429 11:37:08.011170 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:08.014833 1237617 logs.go:123] Gathering logs for kube-scheduler [0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998] ...
	I0429 11:37:08.014861 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998"
	I0429 11:37:08.058312 1237617 logs.go:123] Gathering logs for kube-controller-manager [836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4] ...
	I0429 11:37:08.058343 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4"
	I0429 11:37:08.127521 1237617 logs.go:123] Gathering logs for kindnet [d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172] ...
	I0429 11:37:08.127555 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172"
	I0429 11:37:08.166924 1237617 logs.go:123] Gathering logs for dmesg ...
	I0429 11:37:08.166950 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 11:37:08.185349 1237617 logs.go:123] Gathering logs for describe nodes ...
	I0429 11:37:08.185379 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 11:37:08.339169 1237617 logs.go:123] Gathering logs for kube-apiserver [a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123] ...
	I0429 11:37:08.339244 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123"
	I0429 11:37:08.417175 1237617 logs.go:123] Gathering logs for etcd [18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092] ...
	I0429 11:37:08.417220 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092"
	I0429 11:37:08.469841 1237617 logs.go:123] Gathering logs for coredns [f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853] ...
	I0429 11:37:08.469875 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853"
	I0429 11:37:08.509773 1237617 logs.go:123] Gathering logs for kube-proxy [f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648] ...
	I0429 11:37:08.509805 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648"
	I0429 11:37:08.547447 1237617 logs.go:123] Gathering logs for CRI-O ...
	I0429 11:37:08.547477 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 11:37:08.638499 1237617 logs.go:123] Gathering logs for container status ...
	I0429 11:37:08.638535 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 11:37:08.699116 1237617 logs.go:123] Gathering logs for kubelet ...
	I0429 11:37:08.699146 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 11:37:08.744110 1237617 logs.go:138] Found kubelet problem: Apr 29 11:35:38 addons-760922 kubelet[1488]: W0429 11:35:38.678625    1488 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	W0429 11:37:08.744338 1237617 logs.go:138] Found kubelet problem: Apr 29 11:35:38 addons-760922 kubelet[1488]: E0429 11:35:38.678665    1488 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	I0429 11:37:08.789893 1237617 out.go:304] Setting ErrFile to fd 2...
	I0429 11:37:08.789921 1237617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 11:37:08.789981 1237617 out.go:239] X Problems detected in kubelet:
	W0429 11:37:08.789995 1237617 out.go:239]   Apr 29 11:35:38 addons-760922 kubelet[1488]: W0429 11:35:38.678625    1488 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	W0429 11:37:08.790003 1237617 out.go:239]   Apr 29 11:35:38 addons-760922 kubelet[1488]: E0429 11:35:38.678665    1488 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	I0429 11:37:08.790013 1237617 out.go:304] Setting ErrFile to fd 2...
	I0429 11:37:08.790019 1237617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:37:18.791207 1237617 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 11:37:18.798871 1237617 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0429 11:37:18.799777 1237617 api_server.go:141] control plane version: v1.30.0
	I0429 11:37:18.799803 1237617 api_server.go:131] duration metric: took 11.109550875s to wait for apiserver health ...
	I0429 11:37:18.799812 1237617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 11:37:18.799834 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 11:37:18.799929 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 11:37:18.841038 1237617 cri.go:89] found id: "a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123"
	I0429 11:37:18.841059 1237617 cri.go:89] found id: ""
	I0429 11:37:18.841067 1237617 logs.go:276] 1 containers: [a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123]
	I0429 11:37:18.841129 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:18.844932 1237617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 11:37:18.845004 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 11:37:18.884960 1237617 cri.go:89] found id: "18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092"
	I0429 11:37:18.884982 1237617 cri.go:89] found id: ""
	I0429 11:37:18.884991 1237617 logs.go:276] 1 containers: [18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092]
	I0429 11:37:18.885056 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:18.888727 1237617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 11:37:18.888797 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 11:37:18.925797 1237617 cri.go:89] found id: "f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853"
	I0429 11:37:18.925819 1237617 cri.go:89] found id: ""
	I0429 11:37:18.925827 1237617 logs.go:276] 1 containers: [f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853]
	I0429 11:37:18.925883 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:18.930000 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 11:37:18.930073 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 11:37:18.972366 1237617 cri.go:89] found id: "0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998"
	I0429 11:37:18.972459 1237617 cri.go:89] found id: ""
	I0429 11:37:18.972482 1237617 logs.go:276] 1 containers: [0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998]
	I0429 11:37:18.972542 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:18.977687 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 11:37:18.977758 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 11:37:19.023118 1237617 cri.go:89] found id: "f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648"
	I0429 11:37:19.023143 1237617 cri.go:89] found id: ""
	I0429 11:37:19.023151 1237617 logs.go:276] 1 containers: [f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648]
	I0429 11:37:19.023218 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:19.026910 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 11:37:19.027010 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 11:37:19.065089 1237617 cri.go:89] found id: "836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4"
	I0429 11:37:19.065119 1237617 cri.go:89] found id: ""
	I0429 11:37:19.065127 1237617 logs.go:276] 1 containers: [836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4]
	I0429 11:37:19.065189 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:19.068812 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 11:37:19.068889 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 11:37:19.110244 1237617 cri.go:89] found id: "d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172"
	I0429 11:37:19.110267 1237617 cri.go:89] found id: ""
	I0429 11:37:19.110275 1237617 logs.go:276] 1 containers: [d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172]
	I0429 11:37:19.110340 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:19.114048 1237617 logs.go:123] Gathering logs for describe nodes ...
	I0429 11:37:19.114075 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 11:37:19.259285 1237617 logs.go:123] Gathering logs for kube-apiserver [a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123] ...
	I0429 11:37:19.259356 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123"
	I0429 11:37:19.315055 1237617 logs.go:123] Gathering logs for etcd [18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092] ...
	I0429 11:37:19.315098 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092"
	I0429 11:37:19.363687 1237617 logs.go:123] Gathering logs for kube-scheduler [0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998] ...
	I0429 11:37:19.363718 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998"
	I0429 11:37:19.415781 1237617 logs.go:123] Gathering logs for kube-controller-manager [836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4] ...
	I0429 11:37:19.415812 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4"
	I0429 11:37:19.482069 1237617 logs.go:123] Gathering logs for CRI-O ...
	I0429 11:37:19.482105 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 11:37:19.579150 1237617 logs.go:123] Gathering logs for kubelet ...
	I0429 11:37:19.579186 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 11:37:19.608110 1237617 logs.go:138] Found kubelet problem: Apr 29 11:35:38 addons-760922 kubelet[1488]: W0429 11:35:38.678625    1488 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	W0429 11:37:19.608336 1237617 logs.go:138] Found kubelet problem: Apr 29 11:35:38 addons-760922 kubelet[1488]: E0429 11:35:38.678665    1488 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	I0429 11:37:19.666761 1237617 logs.go:123] Gathering logs for dmesg ...
	I0429 11:37:19.666795 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 11:37:19.686823 1237617 logs.go:123] Gathering logs for container status ...
	I0429 11:37:19.686853 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 11:37:19.744734 1237617 logs.go:123] Gathering logs for kindnet [d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172] ...
	I0429 11:37:19.744764 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172"
	I0429 11:37:19.786630 1237617 logs.go:123] Gathering logs for coredns [f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853] ...
	I0429 11:37:19.786658 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853"
	I0429 11:37:19.828486 1237617 logs.go:123] Gathering logs for kube-proxy [f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648] ...
	I0429 11:37:19.828515 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648"
	I0429 11:37:19.867452 1237617 out.go:304] Setting ErrFile to fd 2...
	I0429 11:37:19.867476 1237617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 11:37:19.867529 1237617 out.go:239] X Problems detected in kubelet:
	W0429 11:37:19.867542 1237617 out.go:239]   Apr 29 11:35:38 addons-760922 kubelet[1488]: W0429 11:35:38.678625    1488 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	W0429 11:37:19.867551 1237617 out.go:239]   Apr 29 11:35:38 addons-760922 kubelet[1488]: E0429 11:35:38.678665    1488 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	I0429 11:37:19.867561 1237617 out.go:304] Setting ErrFile to fd 2...
	I0429 11:37:19.867567 1237617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:37:29.878356 1237617 system_pods.go:59] 18 kube-system pods found
	I0429 11:37:29.878393 1237617 system_pods.go:61] "coredns-7db6d8ff4d-hsk8z" [a0643984-c7ce-414e-84c3-d69620f28409] Running
	I0429 11:37:29.878400 1237617 system_pods.go:61] "csi-hostpath-attacher-0" [7d2591eb-0cf8-452f-949c-f7df587938a4] Running
	I0429 11:37:29.878404 1237617 system_pods.go:61] "csi-hostpath-resizer-0" [9fbcbd11-6229-4f03-80c9-8211f06eb595] Running
	I0429 11:37:29.878409 1237617 system_pods.go:61] "csi-hostpathplugin-zvs7l" [c7e414f0-e537-4283-acbf-6f4e20086035] Running
	I0429 11:37:29.878413 1237617 system_pods.go:61] "etcd-addons-760922" [cfdfca13-5819-4d23-9247-b019d73ef52a] Running
	I0429 11:37:29.878418 1237617 system_pods.go:61] "kindnet-7gjxl" [2f72207f-2fad-412c-bab0-ce62cfb60658] Running
	I0429 11:37:29.878424 1237617 system_pods.go:61] "kube-apiserver-addons-760922" [c671106e-5d08-4cc2-a7fe-85880f52c9bb] Running
	I0429 11:37:29.878429 1237617 system_pods.go:61] "kube-controller-manager-addons-760922" [cf44cb04-07b0-4d0d-90df-e1e8f4500390] Running
	I0429 11:37:29.878439 1237617 system_pods.go:61] "kube-ingress-dns-minikube" [f32d87a8-ebd8-4285-adce-095ad8ceb09b] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0429 11:37:29.878451 1237617 system_pods.go:61] "kube-proxy-w598j" [5f30f9a6-4dff-4f0a-a330-b3776c8936d1] Running
	I0429 11:37:29.878457 1237617 system_pods.go:61] "kube-scheduler-addons-760922" [a64dc54e-fb5a-4f1f-823d-578fdd3e24a9] Running
	I0429 11:37:29.878464 1237617 system_pods.go:61] "metrics-server-c59844bb4-t8bst" [55fed84e-6197-4372-857c-598bbe503660] Running
	I0429 11:37:29.878468 1237617 system_pods.go:61] "nvidia-device-plugin-daemonset-7lk7c" [68690e1d-7f8a-4423-aaed-674894ca372a] Running
	I0429 11:37:29.878476 1237617 system_pods.go:61] "registry-proxy-m8xkv" [33630f46-f313-4cc6-9d44-213e6df0c519] Running
	I0429 11:37:29.878480 1237617 system_pods.go:61] "registry-tj9l7" [9bb8489a-b110-4d66-afb6-a31def145ada] Running
	I0429 11:37:29.878492 1237617 system_pods.go:61] "snapshot-controller-745499f584-dmhbn" [fe7ceb3f-8331-458c-a043-1cf4a4522c0b] Running
	I0429 11:37:29.878496 1237617 system_pods.go:61] "snapshot-controller-745499f584-rkskn" [e83761c3-e638-4da4-978d-42799f1a45fb] Running
	I0429 11:37:29.878500 1237617 system_pods.go:61] "storage-provisioner" [90f23e9d-3062-4587-bf6d-23d10fd60f3c] Running
	I0429 11:37:29.878506 1237617 system_pods.go:74] duration metric: took 11.078688787s to wait for pod list to return data ...
	I0429 11:37:29.878517 1237617 default_sa.go:34] waiting for default service account to be created ...
	I0429 11:37:29.881038 1237617 default_sa.go:45] found service account: "default"
	I0429 11:37:29.881065 1237617 default_sa.go:55] duration metric: took 2.541352ms for default service account to be created ...
	I0429 11:37:29.881075 1237617 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 11:37:29.891034 1237617 system_pods.go:86] 18 kube-system pods found
	I0429 11:37:29.891066 1237617 system_pods.go:89] "coredns-7db6d8ff4d-hsk8z" [a0643984-c7ce-414e-84c3-d69620f28409] Running
	I0429 11:37:29.891074 1237617 system_pods.go:89] "csi-hostpath-attacher-0" [7d2591eb-0cf8-452f-949c-f7df587938a4] Running
	I0429 11:37:29.891079 1237617 system_pods.go:89] "csi-hostpath-resizer-0" [9fbcbd11-6229-4f03-80c9-8211f06eb595] Running
	I0429 11:37:29.891100 1237617 system_pods.go:89] "csi-hostpathplugin-zvs7l" [c7e414f0-e537-4283-acbf-6f4e20086035] Running
	I0429 11:37:29.891111 1237617 system_pods.go:89] "etcd-addons-760922" [cfdfca13-5819-4d23-9247-b019d73ef52a] Running
	I0429 11:37:29.891116 1237617 system_pods.go:89] "kindnet-7gjxl" [2f72207f-2fad-412c-bab0-ce62cfb60658] Running
	I0429 11:37:29.891120 1237617 system_pods.go:89] "kube-apiserver-addons-760922" [c671106e-5d08-4cc2-a7fe-85880f52c9bb] Running
	I0429 11:37:29.891125 1237617 system_pods.go:89] "kube-controller-manager-addons-760922" [cf44cb04-07b0-4d0d-90df-e1e8f4500390] Running
	I0429 11:37:29.891134 1237617 system_pods.go:89] "kube-ingress-dns-minikube" [f32d87a8-ebd8-4285-adce-095ad8ceb09b] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0429 11:37:29.891142 1237617 system_pods.go:89] "kube-proxy-w598j" [5f30f9a6-4dff-4f0a-a330-b3776c8936d1] Running
	I0429 11:37:29.891147 1237617 system_pods.go:89] "kube-scheduler-addons-760922" [a64dc54e-fb5a-4f1f-823d-578fdd3e24a9] Running
	I0429 11:37:29.891152 1237617 system_pods.go:89] "metrics-server-c59844bb4-t8bst" [55fed84e-6197-4372-857c-598bbe503660] Running
	I0429 11:37:29.891158 1237617 system_pods.go:89] "nvidia-device-plugin-daemonset-7lk7c" [68690e1d-7f8a-4423-aaed-674894ca372a] Running
	I0429 11:37:29.891165 1237617 system_pods.go:89] "registry-proxy-m8xkv" [33630f46-f313-4cc6-9d44-213e6df0c519] Running
	I0429 11:37:29.891179 1237617 system_pods.go:89] "registry-tj9l7" [9bb8489a-b110-4d66-afb6-a31def145ada] Running
	I0429 11:37:29.891182 1237617 system_pods.go:89] "snapshot-controller-745499f584-dmhbn" [fe7ceb3f-8331-458c-a043-1cf4a4522c0b] Running
	I0429 11:37:29.891187 1237617 system_pods.go:89] "snapshot-controller-745499f584-rkskn" [e83761c3-e638-4da4-978d-42799f1a45fb] Running
	I0429 11:37:29.891192 1237617 system_pods.go:89] "storage-provisioner" [90f23e9d-3062-4587-bf6d-23d10fd60f3c] Running
	I0429 11:37:29.891202 1237617 system_pods.go:126] duration metric: took 10.121403ms to wait for k8s-apps to be running ...
	I0429 11:37:29.891213 1237617 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 11:37:29.891275 1237617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 11:37:29.906267 1237617 system_svc.go:56] duration metric: took 15.043722ms WaitForService to wait for kubelet
	I0429 11:37:29.906302 1237617 kubeadm.go:576] duration metric: took 2m24.265124891s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 11:37:29.906323 1237617 node_conditions.go:102] verifying NodePressure condition ...
	I0429 11:37:29.909599 1237617 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0429 11:37:29.909631 1237617 node_conditions.go:123] node cpu capacity is 2
	I0429 11:37:29.909647 1237617 node_conditions.go:105] duration metric: took 3.318742ms to run NodePressure ...
	I0429 11:37:29.909660 1237617 start.go:240] waiting for startup goroutines ...
	I0429 11:37:29.909674 1237617 start.go:245] waiting for cluster config update ...
	I0429 11:37:29.909694 1237617 start.go:254] writing updated cluster config ...
	I0429 11:37:29.910040 1237617 ssh_runner.go:195] Run: rm -f paused
	I0429 11:37:30.273170 1237617 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 11:37:30.275761 1237617 out.go:177] * Done! kubectl is now configured to use "addons-760922" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 29 11:41:19 addons-760922 crio[911]: time="2024-04-29 11:41:19.299563283Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=34b2fe23-21e5-46a6-a56e-fb432d1e9d5d name=/runtime.v1.ImageService/ImageStatus
	Apr 29 11:41:19 addons-760922 crio[911]: time="2024-04-29 11:41:19.301671054Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=839b2ed6-a797-473d-b98d-2e660cb3f43f name=/runtime.v1.ImageService/ImageStatus
	Apr 29 11:41:19 addons-760922 crio[911]: time="2024-04-29 11:41:19.301871758Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=839b2ed6-a797-473d-b98d-2e660cb3f43f name=/runtime.v1.ImageService/ImageStatus
	Apr 29 11:41:19 addons-760922 crio[911]: time="2024-04-29 11:41:19.302622761Z" level=info msg="Creating container: default/hello-world-app-86c47465fc-cppn7/hello-world-app" id=3f6c879f-b192-4119-b840-33066e368466 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 29 11:41:19 addons-760922 crio[911]: time="2024-04-29 11:41:19.302715134Z" level=warning msg="Allowed annotations are specified for workload []"
	Apr 29 11:41:19 addons-760922 crio[911]: time="2024-04-29 11:41:19.369346444Z" level=info msg="Created container be56a7dbcbd7b4e41fec683b6d446bd4f3cd2ec6586e4a5736c8d86c3691f354: default/hello-world-app-86c47465fc-cppn7/hello-world-app" id=3f6c879f-b192-4119-b840-33066e368466 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 29 11:41:19 addons-760922 crio[911]: time="2024-04-29 11:41:19.370230829Z" level=info msg="Starting container: be56a7dbcbd7b4e41fec683b6d446bd4f3cd2ec6586e4a5736c8d86c3691f354" id=17b91dc7-2af3-42a3-b6d7-28dcf3bc274d name=/runtime.v1.RuntimeService/StartContainer
	Apr 29 11:41:19 addons-760922 crio[911]: time="2024-04-29 11:41:19.379512926Z" level=info msg="Started container" PID=8315 containerID=be56a7dbcbd7b4e41fec683b6d446bd4f3cd2ec6586e4a5736c8d86c3691f354 description=default/hello-world-app-86c47465fc-cppn7/hello-world-app id=17b91dc7-2af3-42a3-b6d7-28dcf3bc274d name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e6b9e6d3b0be04b03da4fe4cb9494943b762e9214f9be7413dbf20d468233ee
	Apr 29 11:41:19 addons-760922 conmon[8304]: conmon be56a7dbcbd7b4e41fec <ninfo>: container 8315 exited with status 1
	Apr 29 11:41:19 addons-760922 crio[911]: time="2024-04-29 11:41:19.439509706Z" level=info msg="Removing container: 0b9c147a28298769f768901225c46fd87ad0901bcb0e356acc093ccb29a76f4a" id=86bf3d3b-e60a-4124-9d0c-3278c27eca0c name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 29 11:41:19 addons-760922 crio[911]: time="2024-04-29 11:41:19.464444571Z" level=info msg="Removed container 0b9c147a28298769f768901225c46fd87ad0901bcb0e356acc093ccb29a76f4a: default/hello-world-app-86c47465fc-cppn7/hello-world-app" id=86bf3d3b-e60a-4124-9d0c-3278c27eca0c name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 29 11:41:21 addons-760922 crio[911]: time="2024-04-29 11:41:21.117509253Z" level=warning msg="Stopping container de4ace128397651486d6e4ffe02b3337c19c8693a7fbd4039b0ed40b11f6f090 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=9be37343-308f-4321-b980-f18ea39de230 name=/runtime.v1.RuntimeService/StopContainer
	Apr 29 11:41:21 addons-760922 conmon[4796]: conmon de4ace128397651486d6 <ninfo>: container 4807 exited with status 137
	Apr 29 11:41:21 addons-760922 crio[911]: time="2024-04-29 11:41:21.257670050Z" level=info msg="Stopped container de4ace128397651486d6e4ffe02b3337c19c8693a7fbd4039b0ed40b11f6f090: ingress-nginx/ingress-nginx-controller-768f948f8f-7nv8c/controller" id=9be37343-308f-4321-b980-f18ea39de230 name=/runtime.v1.RuntimeService/StopContainer
	Apr 29 11:41:21 addons-760922 crio[911]: time="2024-04-29 11:41:21.258193731Z" level=info msg="Stopping pod sandbox: 23f51ea97559d63b5d6f760b116b24ca09caeb5b02403a3b9667478e1243ab25" id=27eac609-ae0a-4742-affb-1bae36486d3f name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 29 11:41:21 addons-760922 crio[911]: time="2024-04-29 11:41:21.261455489Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-AFFBONJTTJ4EMMMH - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-ITNU2HCD4HMWTBBD - [0:0]\n-X KUBE-HP-AFFBONJTTJ4EMMMH\n-X KUBE-HP-ITNU2HCD4HMWTBBD\nCOMMIT\n"
	Apr 29 11:41:21 addons-760922 crio[911]: time="2024-04-29 11:41:21.269603586Z" level=info msg="Closing host port tcp:80"
	Apr 29 11:41:21 addons-760922 crio[911]: time="2024-04-29 11:41:21.269656730Z" level=info msg="Closing host port tcp:443"
	Apr 29 11:41:21 addons-760922 crio[911]: time="2024-04-29 11:41:21.271063360Z" level=info msg="Host port tcp:80 does not have an open socket"
	Apr 29 11:41:21 addons-760922 crio[911]: time="2024-04-29 11:41:21.271090568Z" level=info msg="Host port tcp:443 does not have an open socket"
	Apr 29 11:41:21 addons-760922 crio[911]: time="2024-04-29 11:41:21.271283576Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-768f948f8f-7nv8c Namespace:ingress-nginx ID:23f51ea97559d63b5d6f760b116b24ca09caeb5b02403a3b9667478e1243ab25 UID:d1003358-7ff8-4c41-9fb4-f5e9b712d810 NetNS:/var/run/netns/cf68b82a-871c-4ff5-b0d6-da7a98ad6e27 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Apr 29 11:41:21 addons-760922 crio[911]: time="2024-04-29 11:41:21.271422324Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-768f948f8f-7nv8c from CNI network \"kindnet\" (type=ptp)"
	Apr 29 11:41:21 addons-760922 crio[911]: time="2024-04-29 11:41:21.298391140Z" level=info msg="Stopped pod sandbox: 23f51ea97559d63b5d6f760b116b24ca09caeb5b02403a3b9667478e1243ab25" id=27eac609-ae0a-4742-affb-1bae36486d3f name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 29 11:41:21 addons-760922 crio[911]: time="2024-04-29 11:41:21.446544473Z" level=info msg="Removing container: de4ace128397651486d6e4ffe02b3337c19c8693a7fbd4039b0ed40b11f6f090" id=99d10e44-3219-4c3a-8dea-549480b9b7fb name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 29 11:41:21 addons-760922 crio[911]: time="2024-04-29 11:41:21.460661620Z" level=info msg="Removed container de4ace128397651486d6e4ffe02b3337c19c8693a7fbd4039b0ed40b11f6f090: ingress-nginx/ingress-nginx-controller-768f948f8f-7nv8c/controller" id=99d10e44-3219-4c3a-8dea-549480b9b7fb name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	be56a7dbcbd7b       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             7 seconds ago       Exited              hello-world-app           2                   3e6b9e6d3b0be       hello-world-app-86c47465fc-cppn7
	5dcda6f95d8c2       docker.io/library/nginx@sha256:1f37baf7373d386ee9de0437325ae3e0202a3959803fd79144fa0bb27e2b2801                              2 minutes ago       Running             nginx                     0                   3d47ec3f5ea3c       nginx
	ff63c37cdf5c6       ghcr.io/headlamp-k8s/headlamp@sha256:1f277f42730106526a27560517a4c5f9253ccb2477be458986f44a791158a02c                        3 minutes ago       Running             headlamp                  0                   3bd7926cee370       headlamp-7559bf459f-8k4nc
	27a8b2ead9827       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 4 minutes ago       Running             gcp-auth                  0                   b3fcccd59b9ee       gcp-auth-5db96cd9b4-ls88f
	972973edd7a4f       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              5 minutes ago       Running             yakd                      0                   cdcaa68701c7d       yakd-dashboard-5ddbf7d777-jjvnb
	a8699a382a201       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   5 minutes ago       Exited              patch                     0                   edae3fa20eea9       ingress-nginx-admission-patch-xjxmr
	99a5577a3140a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   5 minutes ago       Exited              create                    0                   977f327676cc3       ingress-nginx-admission-create-7sqcs
	052392dd63f2a       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70        5 minutes ago       Running             metrics-server            0                   2b42b7d5388f5       metrics-server-c59844bb4-t8bst
	2889233ebbfde       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             5 minutes ago       Running             local-path-provisioner    0                   00dc4dc432ca7       local-path-provisioner-8d985888d-jvmjz
	ea33593fdd912       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago       Running             storage-provisioner       0                   0095105c319d5       storage-provisioner
	f5aa390616f68       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             5 minutes ago       Running             coredns                   0                   f10b1db56a3bb       coredns-7db6d8ff4d-hsk8z
	f6d37371c711b       cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f                                                             6 minutes ago       Running             kube-proxy                0                   f3188134a23b4       kube-proxy-w598j
	d1c2bca574223       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d                                                             6 minutes ago       Running             kindnet-cni               0                   36a1c944f17c0       kindnet-7gjxl
	18d8a3169373e       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                             6 minutes ago       Running             etcd                      0                   8ddcb5be7043b       etcd-addons-760922
	0b4bf008d8310       547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a                                                             6 minutes ago       Running             kube-scheduler            0                   8eab0f38d946a       kube-scheduler-addons-760922
	836169ee36c10       68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1                                                             6 minutes ago       Running             kube-controller-manager   0                   bd9885044a0ff       kube-controller-manager-addons-760922
	a7a7309bbe879       181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb                                                             6 minutes ago       Running             kube-apiserver            0                   269a5fc33ea83       kube-apiserver-addons-760922
	
	
	==> coredns [f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853] <==
	[INFO] 10.244.0.19:41444 - 21642 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000059799s
	[INFO] 10.244.0.19:41444 - 47992 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004544942s
	[INFO] 10.244.0.19:42204 - 60956 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00595076s
	[INFO] 10.244.0.19:41444 - 31957 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001960933s
	[INFO] 10.244.0.19:41444 - 39504 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000117226s
	[INFO] 10.244.0.19:42204 - 40731 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001193528s
	[INFO] 10.244.0.19:42204 - 26944 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000068275s
	[INFO] 10.244.0.19:58214 - 34023 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000181127s
	[INFO] 10.244.0.19:47300 - 63375 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000168368s
	[INFO] 10.244.0.19:47300 - 55534 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000080795s
	[INFO] 10.244.0.19:58214 - 16367 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000043413s
	[INFO] 10.244.0.19:47300 - 29214 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000074839s
	[INFO] 10.244.0.19:47300 - 17173 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000060258s
	[INFO] 10.244.0.19:58214 - 49839 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000049682s
	[INFO] 10.244.0.19:47300 - 5093 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005682s
	[INFO] 10.244.0.19:58214 - 53864 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055376s
	[INFO] 10.244.0.19:47300 - 2823 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000094883s
	[INFO] 10.244.0.19:58214 - 51116 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058757s
	[INFO] 10.244.0.19:58214 - 41016 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000053809s
	[INFO] 10.244.0.19:47300 - 28938 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001655909s
	[INFO] 10.244.0.19:58214 - 17059 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001247837s
	[INFO] 10.244.0.19:47300 - 55826 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001225109s
	[INFO] 10.244.0.19:47300 - 28929 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000073723s
	[INFO] 10.244.0.19:58214 - 22885 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001166976s
	[INFO] 10.244.0.19:58214 - 36751 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000047664s
	
	
	==> describe nodes <==
	Name:               addons-760922
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-760922
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d
	                    minikube.k8s.io/name=addons-760922
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T11_34_52_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-760922
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 11:34:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-760922
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 11:41:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 11:38:57 +0000   Mon, 29 Apr 2024 11:34:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 11:38:57 +0000   Mon, 29 Apr 2024 11:34:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 11:38:57 +0000   Mon, 29 Apr 2024 11:34:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 11:38:57 +0000   Mon, 29 Apr 2024 11:35:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-760922
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 6400e2c8e40e41409106f64aaa4c5941
	  System UUID:                3b9fc9c3-4dee-4215-86b3-ff01ad01914f
	  Boot ID:                    b8f2360a-0b19-4e04-aa8c-604719eae8f1
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-cppn7          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  gcp-auth                    gcp-auth-5db96cd9b4-ls88f                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  headlamp                    headlamp-7559bf459f-8k4nc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 coredns-7db6d8ff4d-hsk8z                  100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m21s
	  kube-system                 etcd-addons-760922                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m35s
	  kube-system                 kindnet-7gjxl                             100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m21s
	  kube-system                 kube-apiserver-addons-760922              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-controller-manager-addons-760922     200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-proxy-w598j                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-scheduler-addons-760922              100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 metrics-server-c59844bb4-t8bst            100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m16s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  local-path-storage          local-path-provisioner-8d985888d-jvmjz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-jjvnb           0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     6m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             548Mi (6%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m16s                  kube-proxy       
	  Normal  Starting                 6m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m42s (x8 over 6m42s)  kubelet          Node addons-760922 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m42s (x8 over 6m42s)  kubelet          Node addons-760922 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m42s (x8 over 6m42s)  kubelet          Node addons-760922 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m35s                  kubelet          Node addons-760922 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s                  kubelet          Node addons-760922 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s                  kubelet          Node addons-760922 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m21s                  node-controller  Node addons-760922 event: Registered Node addons-760922 in Controller
	  Normal  NodeReady                5m48s                  kubelet          Node addons-760922 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001075] FS-Cache: O-key=[8] 'c63e5c0100000000'
	[  +0.000727] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000936] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=000000002cddfdcd
	[  +0.001167] FS-Cache: N-key=[8] 'c63e5c0100000000'
	[  +0.002652] FS-Cache: Duplicate cookie detected
	[  +0.000752] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.000953] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=00000000b5edf900
	[  +0.001050] FS-Cache: O-key=[8] 'c63e5c0100000000'
	[  +0.000708] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000980] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=000000006f4e83dc
	[  +0.001064] FS-Cache: N-key=[8] 'c63e5c0100000000'
	[  +3.263862] FS-Cache: Duplicate cookie detected
	[  +0.000761] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000990] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=000000005910e427
	[  +0.001043] FS-Cache: O-key=[8] 'c53e5c0100000000'
	[  +0.000806] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000c61f21fc
	[  +0.001058] FS-Cache: N-key=[8] 'c53e5c0100000000'
	[  +0.258281] FS-Cache: Duplicate cookie detected
	[  +0.000808] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.000970] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=00000000820f26a0
	[  +0.001043] FS-Cache: O-key=[8] 'cb3e5c0100000000'
	[  +0.000760] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000937] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000bb4b91ea
	[  +0.001031] FS-Cache: N-key=[8] 'cb3e5c0100000000'
	
	
	==> etcd [18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092] <==
	{"level":"info","ts":"2024-04-29T11:34:44.834905Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T11:34:45.796721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-29T11:34:45.796836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-29T11:34:45.796882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-04-29T11:34:45.796936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-04-29T11:34:45.796967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-04-29T11:34:45.797003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-04-29T11:34:45.797036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-04-29T11:34:45.799049Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-760922 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T11:34:45.799128Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T11:34:45.799439Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T11:34:45.799556Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T11:34:45.806367Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T11:34:45.808427Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T11:34:45.808573Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T11:34:45.809481Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T11:34:45.830421Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-04-29T11:34:45.836753Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T11:34:45.83684Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T11:35:07.308711Z","caller":"traceutil/trace.go:171","msg":"trace[1688005631] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"386.89247ms","start":"2024-04-29T11:35:06.921796Z","end":"2024-04-29T11:35:07.308689Z","steps":["trace[1688005631] 'process raft request'  (duration: 341.270412ms)","trace[1688005631] 'compare'  (duration: 45.39106ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T11:35:07.310163Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T11:35:06.921776Z","time spent":"388.004488ms","remote":"127.0.0.1:59276","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":707,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kindnet-7gjxl.17cabd1530c7590e\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kindnet-7gjxl.17cabd1530c7590e\" value_size:630 lease:8128028836484299275 >> failure:<>"}
	{"level":"info","ts":"2024-04-29T11:35:07.31065Z","caller":"traceutil/trace.go:171","msg":"trace[860246016] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"138.081324ms","start":"2024-04-29T11:35:07.172555Z","end":"2024-04-29T11:35:07.310636Z","steps":["trace[860246016] 'process raft request'  (duration: 136.048851ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T11:35:09.244469Z","caller":"traceutil/trace.go:171","msg":"trace[1895410619] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"135.287222ms","start":"2024-04-29T11:35:09.109167Z","end":"2024-04-29T11:35:09.244454Z","steps":["trace[1895410619] 'process raft request'  (duration: 48.88677ms)","trace[1895410619] 'compare'  (duration: 86.224642ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T11:35:09.945871Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.808979ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-760922\" ","response":"range_response_count:1 size:5744"}
	{"level":"info","ts":"2024-04-29T11:35:09.946005Z","caller":"traceutil/trace.go:171","msg":"trace[346994486] range","detail":"{range_begin:/registry/minions/addons-760922; range_end:; response_count:1; response_revision:461; }","duration":"107.950918ms","start":"2024-04-29T11:35:09.838041Z","end":"2024-04-29T11:35:09.945992Z","steps":["trace[346994486] 'agreement among raft nodes before linearized reading'  (duration: 91.782387ms)","trace[346994486] 'get authentication metadata'  (duration: 15.96452ms)"],"step_count":2}
	
	
	==> gcp-auth [27a8b2ead9827a1abec94db6cbca613a00f45bbccfdbe9469ca8ac31e5fe2e4f] <==
	2024/04/29 11:36:45 GCP Auth Webhook started!
	2024/04/29 11:37:39 Ready to marshal response ...
	2024/04/29 11:37:39 Ready to write response ...
	2024/04/29 11:37:41 Ready to marshal response ...
	2024/04/29 11:37:41 Ready to write response ...
	2024/04/29 11:38:00 Ready to marshal response ...
	2024/04/29 11:38:00 Ready to write response ...
	2024/04/29 11:38:00 Ready to marshal response ...
	2024/04/29 11:38:00 Ready to write response ...
	2024/04/29 11:38:05 Ready to marshal response ...
	2024/04/29 11:38:05 Ready to write response ...
	2024/04/29 11:38:08 Ready to marshal response ...
	2024/04/29 11:38:08 Ready to write response ...
	2024/04/29 11:38:16 Ready to marshal response ...
	2024/04/29 11:38:16 Ready to write response ...
	2024/04/29 11:38:16 Ready to marshal response ...
	2024/04/29 11:38:16 Ready to write response ...
	2024/04/29 11:38:16 Ready to marshal response ...
	2024/04/29 11:38:16 Ready to write response ...
	2024/04/29 11:38:40 Ready to marshal response ...
	2024/04/29 11:38:40 Ready to write response ...
	2024/04/29 11:41:00 Ready to marshal response ...
	2024/04/29 11:41:00 Ready to write response ...
	
	
	==> kernel <==
	 11:41:26 up  7:23,  0 users,  load average: 0.32, 1.22, 2.49
	Linux addons-760922 5.15.0-1058-aws #64~20.04.1-Ubuntu SMP Tue Apr 9 11:11:55 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172] <==
	I0429 11:39:18.458083       1 main.go:227] handling current node
	I0429 11:39:28.468661       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:39:28.468794       1 main.go:227] handling current node
	I0429 11:39:38.472763       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:39:38.472789       1 main.go:227] handling current node
	I0429 11:39:48.485143       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:39:48.485170       1 main.go:227] handling current node
	I0429 11:39:58.494149       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:39:58.494175       1 main.go:227] handling current node
	I0429 11:40:08.507086       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:40:08.507113       1 main.go:227] handling current node
	I0429 11:40:18.513441       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:40:18.513476       1 main.go:227] handling current node
	I0429 11:40:28.519824       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:40:28.519852       1 main.go:227] handling current node
	I0429 11:40:38.524195       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:40:38.524237       1 main.go:227] handling current node
	I0429 11:40:48.528325       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:40:48.528356       1 main.go:227] handling current node
	I0429 11:40:58.534851       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:40:58.534881       1 main.go:227] handling current node
	I0429 11:41:08.539376       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:41:08.539402       1 main.go:227] handling current node
	I0429 11:41:18.550904       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:41:18.550932       1 main.go:227] handling current node
	
	
	==> kube-apiserver [a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123] <==
	E0429 11:36:56.351351       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.55.5:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.55.5:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.55.5:443: connect: connection refused
	W0429 11:36:56.351803       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 11:36:56.351873       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0429 11:36:56.404005       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0429 11:36:56.410153       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0429 11:37:51.873787       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0429 11:37:53.872597       1 watch.go:250] http2: stream closed
	I0429 11:38:16.760954       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.54.230"}
	I0429 11:38:21.658424       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 11:38:21.658473       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 11:38:21.692154       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 11:38:21.692282       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 11:38:21.729677       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 11:38:21.729892       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 11:38:21.804127       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 11:38:21.804169       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0429 11:38:22.754595       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0429 11:38:22.804654       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0429 11:38:22.831004       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0429 11:38:34.353320       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0429 11:38:35.380027       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0429 11:38:39.871105       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0429 11:38:40.212402       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.250.226"}
	I0429 11:41:01.069296       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.113.32"}
	
	
	==> kube-controller-manager [836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4] <==
	E0429 11:39:41.908155       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 11:40:02.055422       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 11:40:02.055463       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 11:40:13.039178       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 11:40:13.039217       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 11:40:30.164406       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 11:40:30.164537       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 11:40:34.445202       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 11:40:34.445238       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 11:40:57.260166       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 11:40:57.260206       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0429 11:41:00.839805       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="54.859643ms"
	I0429 11:41:00.847378       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="7.528589ms"
	I0429 11:41:00.847467       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="45.777µs"
	I0429 11:41:04.420002       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="34.97µs"
	I0429 11:41:05.431394       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="41.92µs"
	I0429 11:41:06.432328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="43.265µs"
	W0429 11:41:08.368113       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 11:41:08.368152       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 11:41:16.620414       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 11:41:16.620457       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0429 11:41:18.083418       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0429 11:41:18.089469       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0429 11:41:18.089604       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="17.567µs"
	I0429 11:41:19.460026       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="41.985µs"
	
	
	==> kube-proxy [f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648] <==
	I0429 11:35:09.640055       1 server_linux.go:69] "Using iptables proxy"
	I0429 11:35:10.229193       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0429 11:35:10.401417       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0429 11:35:10.401471       1 server_linux.go:165] "Using iptables Proxier"
	I0429 11:35:10.403953       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0429 11:35:10.403985       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0429 11:35:10.404008       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 11:35:10.404197       1 server.go:872] "Version info" version="v1.30.0"
	I0429 11:35:10.404218       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 11:35:10.408579       1 config.go:192] "Starting service config controller"
	I0429 11:35:10.408610       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 11:35:10.408648       1 config.go:101] "Starting endpoint slice config controller"
	I0429 11:35:10.408652       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 11:35:10.409182       1 config.go:319] "Starting node config controller"
	I0429 11:35:10.409998       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 11:35:10.516783       1 shared_informer.go:320] Caches are synced for node config
	I0429 11:35:10.517566       1 shared_informer.go:320] Caches are synced for service config
	I0429 11:35:10.517592       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998] <==
	W0429 11:34:49.174032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 11:34:49.174070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 11:34:49.174142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 11:34:49.174183       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 11:34:49.174317       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 11:34:49.174361       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 11:34:49.174468       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 11:34:49.174508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 11:34:49.174577       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 11:34:49.174614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 11:34:49.174687       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 11:34:49.174733       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 11:34:49.175159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 11:34:49.175225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 11:34:49.175300       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 11:34:49.175341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 11:34:49.175445       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 11:34:49.177258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 11:34:49.176803       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 11:34:49.177391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 11:34:49.176844       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 11:34:49.177492       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 11:34:49.990611       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 11:34:49.990654       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0429 11:34:50.667853       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 11:41:06 addons-760922 kubelet[1488]: E0429 11:41:06.408426    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-cppn7_default(a52dcf22-6930-49b0-a486-c54c89f75497)\"" pod="default/hello-world-app-86c47465fc-cppn7" podUID="a52dcf22-6930-49b0-a486-c54c89f75497"
	Apr 29 11:41:08 addons-760922 kubelet[1488]: I0429 11:41:08.299012    1488 scope.go:117] "RemoveContainer" containerID="69a71a9e68743b6ff90d1043842bdcd1148bb8e7ce7c867c9c176d39cc251e7b"
	Apr 29 11:41:08 addons-760922 kubelet[1488]: E0429 11:41:08.299280    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(f32d87a8-ebd8-4285-adce-095ad8ceb09b)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="f32d87a8-ebd8-4285-adce-095ad8ceb09b"
	Apr 29 11:41:16 addons-760922 kubelet[1488]: I0429 11:41:16.925328    1488 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j75fh\" (UniqueName: \"kubernetes.io/projected/f32d87a8-ebd8-4285-adce-095ad8ceb09b-kube-api-access-j75fh\") pod \"f32d87a8-ebd8-4285-adce-095ad8ceb09b\" (UID: \"f32d87a8-ebd8-4285-adce-095ad8ceb09b\") "
	Apr 29 11:41:16 addons-760922 kubelet[1488]: I0429 11:41:16.927434    1488 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f32d87a8-ebd8-4285-adce-095ad8ceb09b-kube-api-access-j75fh" (OuterVolumeSpecName: "kube-api-access-j75fh") pod "f32d87a8-ebd8-4285-adce-095ad8ceb09b" (UID: "f32d87a8-ebd8-4285-adce-095ad8ceb09b"). InnerVolumeSpecName "kube-api-access-j75fh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 29 11:41:17 addons-760922 kubelet[1488]: I0429 11:41:17.026183    1488 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-j75fh\" (UniqueName: \"kubernetes.io/projected/f32d87a8-ebd8-4285-adce-095ad8ceb09b-kube-api-access-j75fh\") on node \"addons-760922\" DevicePath \"\""
	Apr 29 11:41:17 addons-760922 kubelet[1488]: I0429 11:41:17.431202    1488 scope.go:117] "RemoveContainer" containerID="69a71a9e68743b6ff90d1043842bdcd1148bb8e7ce7c867c9c176d39cc251e7b"
	Apr 29 11:41:19 addons-760922 kubelet[1488]: I0429 11:41:19.298827    1488 scope.go:117] "RemoveContainer" containerID="0b9c147a28298769f768901225c46fd87ad0901bcb0e356acc093ccb29a76f4a"
	Apr 29 11:41:19 addons-760922 kubelet[1488]: I0429 11:41:19.300329    1488 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae23e4a5-3bd2-4e4d-9d2a-6b76e981f481" path="/var/lib/kubelet/pods/ae23e4a5-3bd2-4e4d-9d2a-6b76e981f481/volumes"
	Apr 29 11:41:19 addons-760922 kubelet[1488]: I0429 11:41:19.300786    1488 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be2d171f-ea8f-4fbe-a6c7-ded7b588d965" path="/var/lib/kubelet/pods/be2d171f-ea8f-4fbe-a6c7-ded7b588d965/volumes"
	Apr 29 11:41:19 addons-760922 kubelet[1488]: I0429 11:41:19.301154    1488 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f32d87a8-ebd8-4285-adce-095ad8ceb09b" path="/var/lib/kubelet/pods/f32d87a8-ebd8-4285-adce-095ad8ceb09b/volumes"
	Apr 29 11:41:19 addons-760922 kubelet[1488]: I0429 11:41:19.437694    1488 scope.go:117] "RemoveContainer" containerID="0b9c147a28298769f768901225c46fd87ad0901bcb0e356acc093ccb29a76f4a"
	Apr 29 11:41:19 addons-760922 kubelet[1488]: I0429 11:41:19.437921    1488 scope.go:117] "RemoveContainer" containerID="be56a7dbcbd7b4e41fec683b6d446bd4f3cd2ec6586e4a5736c8d86c3691f354"
	Apr 29 11:41:19 addons-760922 kubelet[1488]: E0429 11:41:19.438175    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-cppn7_default(a52dcf22-6930-49b0-a486-c54c89f75497)\"" pod="default/hello-world-app-86c47465fc-cppn7" podUID="a52dcf22-6930-49b0-a486-c54c89f75497"
	Apr 29 11:41:21 addons-760922 kubelet[1488]: I0429 11:41:21.445106    1488 scope.go:117] "RemoveContainer" containerID="de4ace128397651486d6e4ffe02b3337c19c8693a7fbd4039b0ed40b11f6f090"
	Apr 29 11:41:21 addons-760922 kubelet[1488]: I0429 11:41:21.458581    1488 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v85x9\" (UniqueName: \"kubernetes.io/projected/d1003358-7ff8-4c41-9fb4-f5e9b712d810-kube-api-access-v85x9\") pod \"d1003358-7ff8-4c41-9fb4-f5e9b712d810\" (UID: \"d1003358-7ff8-4c41-9fb4-f5e9b712d810\") "
	Apr 29 11:41:21 addons-760922 kubelet[1488]: I0429 11:41:21.458633    1488 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d1003358-7ff8-4c41-9fb4-f5e9b712d810-webhook-cert\") pod \"d1003358-7ff8-4c41-9fb4-f5e9b712d810\" (UID: \"d1003358-7ff8-4c41-9fb4-f5e9b712d810\") "
	Apr 29 11:41:21 addons-760922 kubelet[1488]: I0429 11:41:21.461201    1488 scope.go:117] "RemoveContainer" containerID="de4ace128397651486d6e4ffe02b3337c19c8693a7fbd4039b0ed40b11f6f090"
	Apr 29 11:41:21 addons-760922 kubelet[1488]: I0429 11:41:21.463072    1488 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1003358-7ff8-4c41-9fb4-f5e9b712d810-kube-api-access-v85x9" (OuterVolumeSpecName: "kube-api-access-v85x9") pod "d1003358-7ff8-4c41-9fb4-f5e9b712d810" (UID: "d1003358-7ff8-4c41-9fb4-f5e9b712d810"). InnerVolumeSpecName "kube-api-access-v85x9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 29 11:41:21 addons-760922 kubelet[1488]: I0429 11:41:21.463324    1488 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1003358-7ff8-4c41-9fb4-f5e9b712d810-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d1003358-7ff8-4c41-9fb4-f5e9b712d810" (UID: "d1003358-7ff8-4c41-9fb4-f5e9b712d810"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 29 11:41:21 addons-760922 kubelet[1488]: E0429 11:41:21.468804    1488 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de4ace128397651486d6e4ffe02b3337c19c8693a7fbd4039b0ed40b11f6f090\": container with ID starting with de4ace128397651486d6e4ffe02b3337c19c8693a7fbd4039b0ed40b11f6f090 not found: ID does not exist" containerID="de4ace128397651486d6e4ffe02b3337c19c8693a7fbd4039b0ed40b11f6f090"
	Apr 29 11:41:21 addons-760922 kubelet[1488]: I0429 11:41:21.468858    1488 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de4ace128397651486d6e4ffe02b3337c19c8693a7fbd4039b0ed40b11f6f090"} err="failed to get container status \"de4ace128397651486d6e4ffe02b3337c19c8693a7fbd4039b0ed40b11f6f090\": rpc error: code = NotFound desc = could not find container \"de4ace128397651486d6e4ffe02b3337c19c8693a7fbd4039b0ed40b11f6f090\": container with ID starting with de4ace128397651486d6e4ffe02b3337c19c8693a7fbd4039b0ed40b11f6f090 not found: ID does not exist"
	Apr 29 11:41:21 addons-760922 kubelet[1488]: I0429 11:41:21.559485    1488 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d1003358-7ff8-4c41-9fb4-f5e9b712d810-webhook-cert\") on node \"addons-760922\" DevicePath \"\""
	Apr 29 11:41:21 addons-760922 kubelet[1488]: I0429 11:41:21.559529    1488 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-v85x9\" (UniqueName: \"kubernetes.io/projected/d1003358-7ff8-4c41-9fb4-f5e9b712d810-kube-api-access-v85x9\") on node \"addons-760922\" DevicePath \"\""
	Apr 29 11:41:23 addons-760922 kubelet[1488]: I0429 11:41:23.299891    1488 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1003358-7ff8-4c41-9fb4-f5e9b712d810" path="/var/lib/kubelet/pods/d1003358-7ff8-4c41-9fb4-f5e9b712d810/volumes"
	
	
	==> storage-provisioner [ea33593fdd9125a40506b352fa2ad2a69677db18151afaf9d4dc618415b44b54] <==
	I0429 11:35:39.730021       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 11:35:39.807463       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 11:35:39.807512       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 11:35:39.817005       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 11:35:39.817350       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-760922_6c501549-51d6-4626-92bc-f0d5b966207a!
	I0429 11:35:39.817453       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"274d0fc0-fa2f-4f39-a170-3e2ee4d9c376", APIVersion:"v1", ResourceVersion:"925", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-760922_6c501549-51d6-4626-92bc-f0d5b966207a became leader
	I0429 11:35:39.917827       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-760922_6c501549-51d6-4626-92bc-f0d5b966207a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-760922 -n addons-760922
helpers_test.go:261: (dbg) Run:  kubectl --context addons-760922 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (168.12s)

TestAddons/parallel/MetricsServer (325.67s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.688217ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-t8bst" [55fed84e-6197-4372-857c-598bbe503660] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004616872s
addons_test.go:415: (dbg) Run:  kubectl --context addons-760922 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-760922 top pods -n kube-system: exit status 1 (119.279626ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hsk8z, age: 3m22.092980054s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-760922 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-760922 top pods -n kube-system: exit status 1 (83.395358ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hsk8z, age: 3m24.840886378s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-760922 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-760922 top pods -n kube-system: exit status 1 (79.098201ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hsk8z, age: 3m27.815845623s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-760922 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-760922 top pods -n kube-system: exit status 1 (118.191066ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hsk8z, age: 3m37.517721872s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-760922 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-760922 top pods -n kube-system: exit status 1 (87.163879ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hsk8z, age: 3m42.72275987s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-760922 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-760922 top pods -n kube-system: exit status 1 (84.561391ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hsk8z, age: 3m53.285234806s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-760922 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-760922 top pods -n kube-system: exit status 1 (89.739289ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hsk8z, age: 4m12.177237597s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-760922 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-760922 top pods -n kube-system: exit status 1 (203.591813ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hsk8z, age: 4m55.454696132s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-760922 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-760922 top pods -n kube-system: exit status 1 (83.062192ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hsk8z, age: 6m5.677662553s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-760922 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-760922 top pods -n kube-system: exit status 1 (85.452942ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hsk8z, age: 7m18.906760602s

** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-760922 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-760922 top pods -n kube-system: exit status 1 (92.653034ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hsk8z, age: 8m39.464490311s

** /stderr **
addons_test.go:429: failed checking metric server: exit status 1
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-760922 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-760922
helpers_test.go:235: (dbg) docker inspect addons-760922:

-- stdout --
	[
	    {
	        "Id": "acf70231910d5cd4ee4713c17db090abfc96713bb9ed8bf6949a750ff680fdbf",
	        "Created": "2024-04-29T11:34:28.033467642Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1238069,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-29T11:34:28.328910908Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c9315e0f61546d7b9630cf89252fa7f614fc966830e816cca5333df5c944376f",
	        "ResolvConfPath": "/var/lib/docker/containers/acf70231910d5cd4ee4713c17db090abfc96713bb9ed8bf6949a750ff680fdbf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/acf70231910d5cd4ee4713c17db090abfc96713bb9ed8bf6949a750ff680fdbf/hostname",
	        "HostsPath": "/var/lib/docker/containers/acf70231910d5cd4ee4713c17db090abfc96713bb9ed8bf6949a750ff680fdbf/hosts",
	        "LogPath": "/var/lib/docker/containers/acf70231910d5cd4ee4713c17db090abfc96713bb9ed8bf6949a750ff680fdbf/acf70231910d5cd4ee4713c17db090abfc96713bb9ed8bf6949a750ff680fdbf-json.log",
	        "Name": "/addons-760922",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-760922:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-760922",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/832d143a181c8a5d725e9a1a80315a6f7477608759df936cc5c4f8d624673f4d-init/diff:/var/lib/docker/overlay2/99267fe96688a6fee0a92469b55a9da51d73214dc11fc371bf5149dbc069c731/diff",
	                "MergedDir": "/var/lib/docker/overlay2/832d143a181c8a5d725e9a1a80315a6f7477608759df936cc5c4f8d624673f4d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/832d143a181c8a5d725e9a1a80315a6f7477608759df936cc5c4f8d624673f4d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/832d143a181c8a5d725e9a1a80315a6f7477608759df936cc5c4f8d624673f4d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-760922",
	                "Source": "/var/lib/docker/volumes/addons-760922/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-760922",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-760922",
	                "name.minikube.sigs.k8s.io": "addons-760922",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f107ebcfb47a9ccc41c58d28aacc2d32162c73103ef133439caf0386f289eac8",
	            "SandboxKey": "/var/run/docker/netns/f107ebcfb47a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34278"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34277"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34274"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34276"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34275"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-760922": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "aa18007d61fa281b0d63804776f16a7a9362ec2b322dcf76ee719f08f8b5b429",
	                    "EndpointID": "eac6c9ff872b0265ff91315e23dd5f90d8d58340ae76bc3113f89b2a6f110287",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-760922",
	                        "acf70231910d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-760922 -n addons-760922
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-760922 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-760922 logs -n 25: (1.530427917s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-665613                                                                     | download-only-665613   | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC | 29 Apr 24 11:34 UTC |
	| delete  | -p download-only-895081                                                                     | download-only-895081   | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC | 29 Apr 24 11:34 UTC |
	| delete  | -p download-only-665613                                                                     | download-only-665613   | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC | 29 Apr 24 11:34 UTC |
	| start   | --download-only -p                                                                          | download-docker-209390 | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC |                     |
	|         | download-docker-209390                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-209390                                                                   | download-docker-209390 | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC | 29 Apr 24 11:34 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-725376   | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC |                     |
	|         | binary-mirror-725376                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45633                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-725376                                                                     | binary-mirror-725376   | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC | 29 Apr 24 11:34 UTC |
	| addons  | enable dashboard -p                                                                         | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC |                     |
	|         | addons-760922                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC |                     |
	|         | addons-760922                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-760922 --wait=true                                                                | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:34 UTC | 29 Apr 24 11:37 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-760922 ip                                                                            | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:37 UTC | 29 Apr 24 11:37 UTC |
	| addons  | addons-760922 addons disable                                                                | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:37 UTC | 29 Apr 24 11:37 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:37 UTC | 29 Apr 24 11:38 UTC |
	|         | -p addons-760922                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-760922 ssh cat                                                                       | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:38 UTC | 29 Apr 24 11:38 UTC |
	|         | /opt/local-path-provisioner/pvc-bb91cfb2-1bc0-483a-82bf-c8a42280a852_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-760922 addons disable                                                                | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:38 UTC | 29 Apr 24 11:38 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-760922 addons                                                                        | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:38 UTC | 29 Apr 24 11:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:38 UTC | 29 Apr 24 11:38 UTC |
	|         | addons-760922                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:38 UTC | 29 Apr 24 11:38 UTC |
	|         | -p addons-760922                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-760922 addons                                                                        | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:38 UTC | 29 Apr 24 11:38 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:38 UTC | 29 Apr 24 11:38 UTC |
	|         | addons-760922                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-760922 ssh curl -s                                                                   | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:38 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-760922 ip                                                                            | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:41 UTC | 29 Apr 24 11:41 UTC |
	| addons  | addons-760922 addons disable                                                                | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:41 UTC | 29 Apr 24 11:41 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-760922 addons disable                                                                | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:41 UTC | 29 Apr 24 11:41 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-760922 addons                                                                        | addons-760922          | jenkins | v1.33.0 | 29 Apr 24 11:43 UTC | 29 Apr 24 11:43 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 11:34:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 11:34:04.652540 1237617 out.go:291] Setting OutFile to fd 1 ...
	I0429 11:34:04.652706 1237617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:34:04.652716 1237617 out.go:304] Setting ErrFile to fd 2...
	I0429 11:34:04.652721 1237617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:34:04.652955 1237617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18756-1231546/.minikube/bin
	I0429 11:34:04.653407 1237617 out.go:298] Setting JSON to false
	I0429 11:34:04.654299 1237617 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":26189,"bootTime":1714364256,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0429 11:34:04.654373 1237617 start.go:139] virtualization:  
	I0429 11:34:04.657390 1237617 out.go:177] * [addons-760922] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0429 11:34:04.661168 1237617 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 11:34:04.663249 1237617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:34:04.661284 1237617 notify.go:220] Checking for updates...
	I0429 11:34:04.667216 1237617 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18756-1231546/kubeconfig
	I0429 11:34:04.669479 1237617 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18756-1231546/.minikube
	I0429 11:34:04.671454 1237617 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0429 11:34:04.674266 1237617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 11:34:04.676642 1237617 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 11:34:04.699144 1237617 docker.go:122] docker version: linux-26.1.0:Docker Engine - Community
	I0429 11:34:04.699270 1237617 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 11:34:04.760727 1237617 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-29 11:34:04.75150628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 11:34:04.760839 1237617 docker.go:295] overlay module found
	I0429 11:34:04.763013 1237617 out.go:177] * Using the docker driver based on user configuration
	I0429 11:34:04.764800 1237617 start.go:297] selected driver: docker
	I0429 11:34:04.764813 1237617 start.go:901] validating driver "docker" against <nil>
	I0429 11:34:04.764827 1237617 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 11:34:04.765472 1237617 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 11:34:04.816941 1237617 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-29 11:34:04.808138598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 11:34:04.817110 1237617 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 11:34:04.817346 1237617 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 11:34:04.819030 1237617 out.go:177] * Using Docker driver with root privileges
	I0429 11:34:04.821064 1237617 cni.go:84] Creating CNI manager for ""
	I0429 11:34:04.821090 1237617 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 11:34:04.821099 1237617 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 11:34:04.821192 1237617 start.go:340] cluster config:
	{Name:addons-760922 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-760922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:34:04.823276 1237617 out.go:177] * Starting "addons-760922" primary control-plane node in "addons-760922" cluster
	I0429 11:34:04.825298 1237617 cache.go:121] Beginning downloading kic base image for docker with crio
	I0429 11:34:04.827269 1237617 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 11:34:04.829413 1237617 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 11:34:04.829446 1237617 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 11:34:04.829474 1237617 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18756-1231546/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4
	I0429 11:34:04.829484 1237617 cache.go:56] Caching tarball of preloaded images
	I0429 11:34:04.829573 1237617 preload.go:173] Found /home/jenkins/minikube-integration/18756-1231546/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0429 11:34:04.829588 1237617 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 11:34:04.829947 1237617 profile.go:143] Saving config to /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/config.json ...
	I0429 11:34:04.829968 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/config.json: {Name:mk8ff81118efc3ea2062fe7790d26ea20ad501d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:04.843100 1237617 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0429 11:34:04.843219 1237617 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory
	I0429 11:34:04.843245 1237617 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory, skipping pull
	I0429 11:34:04.843254 1237617 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in cache, skipping pull
	I0429 11:34:04.843262 1237617 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e as a tarball
	I0429 11:34:04.843272 1237617 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e from local cache
	I0429 11:34:21.280204 1237617 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e from cached tarball
	I0429 11:34:21.280239 1237617 cache.go:194] Successfully downloaded all kic artifacts
	I0429 11:34:21.280268 1237617 start.go:360] acquireMachinesLock for addons-760922: {Name:mk795d68e2ddd6b7e26da53c29b36b6339fa2857 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 11:34:21.280388 1237617 start.go:364] duration metric: took 97.813µs to acquireMachinesLock for "addons-760922"
	I0429 11:34:21.280420 1237617 start.go:93] Provisioning new machine with config: &{Name:addons-760922 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-760922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 11:34:21.280522 1237617 start.go:125] createHost starting for "" (driver="docker")
	I0429 11:34:21.282901 1237617 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0429 11:34:21.283138 1237617 start.go:159] libmachine.API.Create for "addons-760922" (driver="docker")
	I0429 11:34:21.283178 1237617 client.go:168] LocalClient.Create starting
	I0429 11:34:21.283318 1237617 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca.pem
	I0429 11:34:21.554507 1237617 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/cert.pem
	I0429 11:34:21.746515 1237617 cli_runner.go:164] Run: docker network inspect addons-760922 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 11:34:21.762560 1237617 cli_runner.go:211] docker network inspect addons-760922 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 11:34:21.762659 1237617 network_create.go:281] running [docker network inspect addons-760922] to gather additional debugging logs...
	I0429 11:34:21.762686 1237617 cli_runner.go:164] Run: docker network inspect addons-760922
	W0429 11:34:21.780341 1237617 cli_runner.go:211] docker network inspect addons-760922 returned with exit code 1
	I0429 11:34:21.780374 1237617 network_create.go:284] error running [docker network inspect addons-760922]: docker network inspect addons-760922: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-760922 not found
	I0429 11:34:21.780399 1237617 network_create.go:286] output of [docker network inspect addons-760922]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-760922 not found
	
	** /stderr **
	I0429 11:34:21.780493 1237617 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 11:34:21.797382 1237617 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400277eb60}
	I0429 11:34:21.797421 1237617 network_create.go:124] attempt to create docker network addons-760922 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0429 11:34:21.797477 1237617 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-760922 addons-760922
	I0429 11:34:21.861693 1237617 network_create.go:108] docker network addons-760922 192.168.49.0/24 created
	I0429 11:34:21.861725 1237617 kic.go:121] calculated static IP "192.168.49.2" for the "addons-760922" container
	I0429 11:34:21.861817 1237617 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 11:34:21.876333 1237617 cli_runner.go:164] Run: docker volume create addons-760922 --label name.minikube.sigs.k8s.io=addons-760922 --label created_by.minikube.sigs.k8s.io=true
	I0429 11:34:21.892395 1237617 oci.go:103] Successfully created a docker volume addons-760922
	I0429 11:34:21.892476 1237617 cli_runner.go:164] Run: docker run --rm --name addons-760922-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-760922 --entrypoint /usr/bin/test -v addons-760922:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 11:34:23.844245 1237617 cli_runner.go:217] Completed: docker run --rm --name addons-760922-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-760922 --entrypoint /usr/bin/test -v addons-760922:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib: (1.951720717s)
	I0429 11:34:23.844279 1237617 oci.go:107] Successfully prepared a docker volume addons-760922
	I0429 11:34:23.844306 1237617 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 11:34:23.844325 1237617 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 11:34:23.844410 1237617 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18756-1231546/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-760922:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 11:34:27.964526 1237617 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18756-1231546/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-760922:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.120073957s)
	I0429 11:34:27.964562 1237617 kic.go:203] duration metric: took 4.120233883s to extract preloaded images to volume ...
	W0429 11:34:27.964716 1237617 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0429 11:34:27.964832 1237617 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0429 11:34:28.019282 1237617 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-760922 --name addons-760922 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-760922 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-760922 --network addons-760922 --ip 192.168.49.2 --volume addons-760922:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e
	I0429 11:34:28.336633 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Running}}
	I0429 11:34:28.361288 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:34:28.384444 1237617 cli_runner.go:164] Run: docker exec addons-760922 stat /var/lib/dpkg/alternatives/iptables
	I0429 11:34:28.455814 1237617 oci.go:144] the created container "addons-760922" has a running status.
	I0429 11:34:28.455846 1237617 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa...
	I0429 11:34:28.925821 1237617 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0429 11:34:28.955193 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:34:28.978252 1237617 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0429 11:34:28.978272 1237617 kic_runner.go:114] Args: [docker exec --privileged addons-760922 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0429 11:34:29.043716 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:34:29.069194 1237617 machine.go:94] provisionDockerMachine start ...
	I0429 11:34:29.069283 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:34:29.091374 1237617 main.go:141] libmachine: Using SSH client type: native
	I0429 11:34:29.091636 1237617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34278 <nil> <nil>}
	I0429 11:34:29.091645 1237617 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 11:34:29.240258 1237617 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-760922
	
	I0429 11:34:29.240321 1237617 ubuntu.go:169] provisioning hostname "addons-760922"
	I0429 11:34:29.240417 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:34:29.257735 1237617 main.go:141] libmachine: Using SSH client type: native
	I0429 11:34:29.257972 1237617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34278 <nil> <nil>}
	I0429 11:34:29.257983 1237617 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-760922 && echo "addons-760922" | sudo tee /etc/hostname
	I0429 11:34:29.407508 1237617 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-760922
	
	I0429 11:34:29.407667 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:34:29.423856 1237617 main.go:141] libmachine: Using SSH client type: native
	I0429 11:34:29.424100 1237617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34278 <nil> <nil>}
	I0429 11:34:29.424116 1237617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-760922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-760922/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-760922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 11:34:29.548876 1237617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 11:34:29.548902 1237617 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18756-1231546/.minikube CaCertPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18756-1231546/.minikube}
	I0429 11:34:29.548944 1237617 ubuntu.go:177] setting up certificates
	I0429 11:34:29.548959 1237617 provision.go:84] configureAuth start
	I0429 11:34:29.549025 1237617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-760922
	I0429 11:34:29.565188 1237617 provision.go:143] copyHostCerts
	I0429 11:34:29.565272 1237617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.pem (1082 bytes)
	I0429 11:34:29.565398 1237617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18756-1231546/.minikube/cert.pem (1123 bytes)
	I0429 11:34:29.565468 1237617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18756-1231546/.minikube/key.pem (1675 bytes)
	I0429 11:34:29.565524 1237617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca-key.pem org=jenkins.addons-760922 san=[127.0.0.1 192.168.49.2 addons-760922 localhost minikube]
	I0429 11:34:30.051555 1237617 provision.go:177] copyRemoteCerts
	I0429 11:34:30.051632 1237617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 11:34:30.051678 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:34:30.072945 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:34:30.166306 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 11:34:30.193667 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 11:34:30.220123 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 11:34:30.244870 1237617 provision.go:87] duration metric: took 695.894789ms to configureAuth
	I0429 11:34:30.244896 1237617 ubuntu.go:193] setting minikube options for container-runtime
	I0429 11:34:30.245115 1237617 config.go:182] Loaded profile config "addons-760922": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 11:34:30.245238 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:34:30.263882 1237617 main.go:141] libmachine: Using SSH client type: native
	I0429 11:34:30.264129 1237617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34278 <nil> <nil>}
	I0429 11:34:30.264149 1237617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 11:34:30.485874 1237617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 11:34:30.485937 1237617 machine.go:97] duration metric: took 1.416722728s to provisionDockerMachine
	I0429 11:34:30.485962 1237617 client.go:171] duration metric: took 9.202772221s to LocalClient.Create
	I0429 11:34:30.486010 1237617 start.go:167] duration metric: took 9.202860705s to libmachine.API.Create "addons-760922"
	I0429 11:34:30.486036 1237617 start.go:293] postStartSetup for "addons-760922" (driver="docker")
	I0429 11:34:30.486059 1237617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 11:34:30.486170 1237617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 11:34:30.486258 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:34:30.507746 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:34:30.598372 1237617 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 11:34:30.601690 1237617 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0429 11:34:30.601726 1237617 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0429 11:34:30.601737 1237617 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0429 11:34:30.601744 1237617 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0429 11:34:30.601761 1237617 filesync.go:126] Scanning /home/jenkins/minikube-integration/18756-1231546/.minikube/addons for local assets ...
	I0429 11:34:30.601835 1237617 filesync.go:126] Scanning /home/jenkins/minikube-integration/18756-1231546/.minikube/files for local assets ...
	I0429 11:34:30.601860 1237617 start.go:296] duration metric: took 115.806029ms for postStartSetup
	I0429 11:34:30.602185 1237617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-760922
	I0429 11:34:30.618381 1237617 profile.go:143] Saving config to /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/config.json ...
	I0429 11:34:30.618686 1237617 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 11:34:30.618749 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:34:30.635417 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:34:30.721528 1237617 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 11:34:30.726011 1237617 start.go:128] duration metric: took 9.445474322s to createHost
	I0429 11:34:30.726035 1237617 start.go:83] releasing machines lock for "addons-760922", held for 9.445632763s
	I0429 11:34:30.726117 1237617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-760922
	I0429 11:34:30.741572 1237617 ssh_runner.go:195] Run: cat /version.json
	I0429 11:34:30.741615 1237617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 11:34:30.741624 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:34:30.741666 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:34:30.758178 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:34:30.768890 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:34:30.844164 1237617 ssh_runner.go:195] Run: systemctl --version
	I0429 11:34:30.958780 1237617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 11:34:31.099197 1237617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 11:34:31.103700 1237617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 11:34:31.125572 1237617 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0429 11:34:31.125669 1237617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 11:34:31.162093 1237617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0429 11:34:31.162114 1237617 start.go:494] detecting cgroup driver to use...
	I0429 11:34:31.162147 1237617 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0429 11:34:31.162193 1237617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 11:34:31.179624 1237617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 11:34:31.191562 1237617 docker.go:217] disabling cri-docker service (if available) ...
	I0429 11:34:31.191622 1237617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 11:34:31.206909 1237617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 11:34:31.222116 1237617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 11:34:31.312293 1237617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 11:34:31.400129 1237617 docker.go:233] disabling docker service ...
	I0429 11:34:31.400196 1237617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 11:34:31.420939 1237617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 11:34:31.432411 1237617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 11:34:31.523722 1237617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 11:34:31.624581 1237617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 11:34:31.637057 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 11:34:31.653854 1237617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 11:34:31.653921 1237617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:34:31.663733 1237617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 11:34:31.663808 1237617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:34:31.674262 1237617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:34:31.684801 1237617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:34:31.695750 1237617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 11:34:31.704629 1237617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:34:31.714536 1237617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:34:31.730044 1237617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 11:34:31.739621 1237617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 11:34:31.748064 1237617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 11:34:31.756880 1237617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:34:31.837643 1237617 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 11:34:31.960951 1237617 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 11:34:31.961087 1237617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 11:34:31.964456 1237617 start.go:562] Will wait 60s for crictl version
	I0429 11:34:31.964526 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:34:31.967870 1237617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 11:34:32.012494 1237617 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0429 11:34:32.012648 1237617 ssh_runner.go:195] Run: crio --version
	I0429 11:34:32.056226 1237617 ssh_runner.go:195] Run: crio --version
	I0429 11:34:32.102370 1237617 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.24.6 ...
	I0429 11:34:32.104264 1237617 cli_runner.go:164] Run: docker network inspect addons-760922 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 11:34:32.119339 1237617 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0429 11:34:32.123001 1237617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
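The `/etc/hosts` update above follows a simple idempotent pattern: filter out any existing line for the name, append the fresh mapping, and copy the result back into place. A minimal sketch of the same filter-and-append dance against a scratch file (the `set_host_entry` helper and the scratch `HOSTS` path are illustrative, not minikube's code):

```shell
# Idempotently set a host entry, working on a scratch copy rather than /etc/hosts.
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n' > "$HOSTS"

set_host_entry() {
  ip=$1; name=$2
  # Drop any existing "<tab>name" line, then append the fresh mapping --
  # the same pattern as the log's bash one-liner.
  { grep -v "$(printf '\t')${name}\$" "$HOSTS"; printf '%s\t%s\n' "$ip" "$name"; } > "${HOSTS}.new"
  mv "${HOSTS}.new" "$HOSTS"
}

set_host_entry 192.168.49.1 host.minikube.internal
cat "$HOSTS"
```

minikube runs this dance twice during provisioning: once for `host.minikube.internal` (the gateway, here) and again later for `control-plane.minikube.internal` (the node IP).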
	I0429 11:34:32.133991 1237617 kubeadm.go:877] updating cluster {Name:addons-760922 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-760922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 11:34:32.134115 1237617 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 11:34:32.134172 1237617 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 11:34:32.214515 1237617 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 11:34:32.214537 1237617 crio.go:433] Images already preloaded, skipping extraction
	I0429 11:34:32.214593 1237617 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 11:34:32.254574 1237617 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 11:34:32.254596 1237617 cache_images.go:84] Images are preloaded, skipping loading
	I0429 11:34:32.254606 1237617 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.0 crio true true} ...
	I0429 11:34:32.254696 1237617 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-760922 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-760922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 11:34:32.254785 1237617 ssh_runner.go:195] Run: crio config
	I0429 11:34:32.309210 1237617 cni.go:84] Creating CNI manager for ""
	I0429 11:34:32.309239 1237617 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 11:34:32.309260 1237617 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 11:34:32.309284 1237617 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-760922 NodeName:addons-760922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 11:34:32.309435 1237617 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-760922"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
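The generated kubeadm config above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, then the kubelet and kube-proxy configs) separated by `---` markers. A quick stdlib-only sanity check of such a stream — splitting on the separators and reading each document's `kind:` line — can be sketched as follows (the inline `CONFIG` is a trimmed stand-in, not minikube's exact file):

```python
# Sanity-check a multi-document kubeadm config stream without a YAML library:
# split on "---" separators and pull each document's "kind:" value.
CONFIG = """\
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
"""

def doc_kinds(stream: str) -> list[str]:
    """Return the kind: of every document in a multi-doc YAML stream."""
    kinds = []
    for doc in stream.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                kinds.append(line.split(":", 1)[1].strip())
    return kinds

print(doc_kinds(CONFIG))  # → ['InitConfiguration', 'ClusterConfiguration', 'KubeletConfiguration']
```

This naive splitter is fine for a well-formed generated file like the one in the log; a real validator would use a YAML parser, since `---` can legally appear inside block scalars.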
	I0429 11:34:32.309511 1237617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 11:34:32.318363 1237617 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 11:34:32.318453 1237617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 11:34:32.327123 1237617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0429 11:34:32.344858 1237617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 11:34:32.362998 1237617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0429 11:34:32.381412 1237617 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0429 11:34:32.384769 1237617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 11:34:32.395516 1237617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:34:32.474980 1237617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 11:34:32.489258 1237617 certs.go:68] Setting up /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922 for IP: 192.168.49.2
	I0429 11:34:32.489323 1237617 certs.go:194] generating shared ca certs ...
	I0429 11:34:32.489353 1237617 certs.go:226] acquiring lock for ca certs: {Name:mkcd7972b318778b7d6fba570abab6a01a410b21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:32.489937 1237617 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.key
	I0429 11:34:32.998001 1237617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.crt ...
	I0429 11:34:32.998036 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.crt: {Name:mkb55926e354b45a8c55ecd39aada1a07cffe5eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:32.998764 1237617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.key ...
	I0429 11:34:32.998785 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.key: {Name:mk8a44ed64694b47d09bdbf0fe8c051b92db4b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:32.999278 1237617 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/proxy-client-ca.key
	I0429 11:34:33.666284 1237617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18756-1231546/.minikube/proxy-client-ca.crt ...
	I0429 11:34:33.666321 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/proxy-client-ca.crt: {Name:mk0b17e32528870a0304f5efb5bd105bfe4ea76f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:33.667942 1237617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18756-1231546/.minikube/proxy-client-ca.key ...
	I0429 11:34:33.667963 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/proxy-client-ca.key: {Name:mk2232e658101caf3170828b2d9085d74040565c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:33.668066 1237617 certs.go:256] generating profile certs ...
	I0429 11:34:33.668137 1237617 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.key
	I0429 11:34:33.668156 1237617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt with IP's: []
	I0429 11:34:34.115413 1237617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt ...
	I0429 11:34:34.115443 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: {Name:mkaf430a8e01e9f887def27f4fea1ff97047a47a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:34.116233 1237617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.key ...
	I0429 11:34:34.116248 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.key: {Name:mkf0c79492aed487569621f8e1d1da25488184a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:34.116974 1237617 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.key.9e4bbe06
	I0429 11:34:34.117026 1237617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.crt.9e4bbe06 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0429 11:34:34.514926 1237617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.crt.9e4bbe06 ...
	I0429 11:34:34.514961 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.crt.9e4bbe06: {Name:mkcc54edb63d4378732201e25d52f4dc767bf62f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:34.515797 1237617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.key.9e4bbe06 ...
	I0429 11:34:34.515820 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.key.9e4bbe06: {Name:mk7f67c28d85bf9bd0e58476f231878c3993570e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:34.515923 1237617 certs.go:381] copying /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.crt.9e4bbe06 -> /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.crt
	I0429 11:34:34.516012 1237617 certs.go:385] copying /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.key.9e4bbe06 -> /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.key
	I0429 11:34:34.516076 1237617 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/proxy-client.key
	I0429 11:34:34.516100 1237617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/proxy-client.crt with IP's: []
	I0429 11:34:34.778633 1237617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/proxy-client.crt ...
	I0429 11:34:34.778665 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/proxy-client.crt: {Name:mka241b7d61c28da857e5d409dc54c02b4d839d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:34.779397 1237617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/proxy-client.key ...
	I0429 11:34:34.779415 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/proxy-client.key: {Name:mk20dedc655d9364df65b3d45460fa539e0ebbf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:34:34.779643 1237617 certs.go:484] found cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 11:34:34.779685 1237617 certs.go:484] found cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca.pem (1082 bytes)
	I0429 11:34:34.779715 1237617 certs.go:484] found cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/cert.pem (1123 bytes)
	I0429 11:34:34.779742 1237617 certs.go:484] found cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/key.pem (1675 bytes)
	I0429 11:34:34.780352 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 11:34:34.807477 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 11:34:34.832809 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 11:34:34.857752 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 11:34:34.883588 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0429 11:34:34.909306 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 11:34:34.933531 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 11:34:34.957908 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 11:34:34.982334 1237617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 11:34:35.008950 1237617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 11:34:35.029900 1237617 ssh_runner.go:195] Run: openssl version
	I0429 11:34:35.036021 1237617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 11:34:35.046023 1237617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:34:35.049667 1237617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:34 /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:34:35.049780 1237617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 11:34:35.056537 1237617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 11:34:35.066224 1237617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 11:34:35.069528 1237617 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 11:34:35.069595 1237617 kubeadm.go:391] StartCluster: {Name:addons-760922 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-760922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:34:35.069686 1237617 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 11:34:35.069745 1237617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 11:34:35.108603 1237617 cri.go:89] found id: ""
	I0429 11:34:35.108698 1237617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 11:34:35.118054 1237617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 11:34:35.127520 1237617 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0429 11:34:35.127619 1237617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 11:34:35.136788 1237617 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 11:34:35.136821 1237617 kubeadm.go:156] found existing configuration files:
	
	I0429 11:34:35.136929 1237617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 11:34:35.146033 1237617 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 11:34:35.146103 1237617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 11:34:35.155261 1237617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 11:34:35.165524 1237617 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 11:34:35.165623 1237617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 11:34:35.174371 1237617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 11:34:35.183423 1237617 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 11:34:35.183489 1237617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 11:34:35.192319 1237617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 11:34:35.201475 1237617 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 11:34:35.201537 1237617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 11:34:35.209990 1237617 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0429 11:34:35.255974 1237617 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 11:34:35.256036 1237617 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 11:34:35.298864 1237617 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0429 11:34:35.298940 1237617 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1058-aws
	I0429 11:34:35.298982 1237617 kubeadm.go:309] OS: Linux
	I0429 11:34:35.299032 1237617 kubeadm.go:309] CGROUPS_CPU: enabled
	I0429 11:34:35.299101 1237617 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0429 11:34:35.299154 1237617 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0429 11:34:35.299205 1237617 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0429 11:34:35.299256 1237617 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0429 11:34:35.299307 1237617 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0429 11:34:35.299358 1237617 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0429 11:34:35.299409 1237617 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0429 11:34:35.299457 1237617 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0429 11:34:35.373866 1237617 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 11:34:35.373982 1237617 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 11:34:35.374079 1237617 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 11:34:35.617262 1237617 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 11:34:35.621108 1237617 out.go:204]   - Generating certificates and keys ...
	I0429 11:34:35.621292 1237617 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 11:34:35.621394 1237617 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 11:34:35.863755 1237617 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 11:34:36.145894 1237617 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 11:34:36.772519 1237617 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 11:34:37.389059 1237617 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 11:34:37.904059 1237617 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 11:34:37.904515 1237617 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-760922 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0429 11:34:38.437819 1237617 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 11:34:38.438127 1237617 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-760922 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0429 11:34:39.028427 1237617 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 11:34:39.432989 1237617 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 11:34:39.687561 1237617 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 11:34:39.687799 1237617 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 11:34:39.970112 1237617 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 11:34:40.200280 1237617 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 11:34:40.917641 1237617 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 11:34:41.704991 1237617 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 11:34:42.461669 1237617 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 11:34:42.462761 1237617 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 11:34:42.472253 1237617 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 11:34:42.474310 1237617 out.go:204]   - Booting up control plane ...
	I0429 11:34:42.474408 1237617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 11:34:42.474484 1237617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 11:34:42.475127 1237617 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 11:34:42.486375 1237617 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 11:34:42.487421 1237617 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 11:34:42.487647 1237617 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 11:34:42.582536 1237617 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 11:34:42.582623 1237617 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 11:34:44.584125 1237617 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 2.001461006s
	I0429 11:34:44.584229 1237617 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 11:34:50.586103 1237617 kubeadm.go:309] [api-check] The API server is healthy after 6.002193018s
	I0429 11:34:50.605271 1237617 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 11:34:50.623113 1237617 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 11:34:50.645190 1237617 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 11:34:50.645420 1237617 kubeadm.go:309] [mark-control-plane] Marking the node addons-760922 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 11:34:50.660126 1237617 kubeadm.go:309] [bootstrap-token] Using token: niags0.uqyndtemqqmk9gvx
	I0429 11:34:50.662507 1237617 out.go:204]   - Configuring RBAC rules ...
	I0429 11:34:50.662649 1237617 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 11:34:50.667330 1237617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 11:34:50.675032 1237617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 11:34:50.678549 1237617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 11:34:50.684076 1237617 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 11:34:50.688178 1237617 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 11:34:50.992818 1237617 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 11:34:51.450680 1237617 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 11:34:51.992103 1237617 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 11:34:51.993396 1237617 kubeadm.go:309] 
	I0429 11:34:51.993472 1237617 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 11:34:51.993485 1237617 kubeadm.go:309] 
	I0429 11:34:51.993560 1237617 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 11:34:51.993569 1237617 kubeadm.go:309] 
	I0429 11:34:51.993595 1237617 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 11:34:51.993655 1237617 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 11:34:51.993710 1237617 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 11:34:51.993719 1237617 kubeadm.go:309] 
	I0429 11:34:51.993778 1237617 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 11:34:51.993787 1237617 kubeadm.go:309] 
	I0429 11:34:51.993833 1237617 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 11:34:51.993842 1237617 kubeadm.go:309] 
	I0429 11:34:51.993892 1237617 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 11:34:51.993967 1237617 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 11:34:51.994036 1237617 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 11:34:51.994044 1237617 kubeadm.go:309] 
	I0429 11:34:51.994126 1237617 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 11:34:51.994203 1237617 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 11:34:51.994210 1237617 kubeadm.go:309] 
	I0429 11:34:51.994290 1237617 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token niags0.uqyndtemqqmk9gvx \
	I0429 11:34:51.994392 1237617 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:76846a2c6b2d6c4faa2ca5b730d7f0eab7128ed63e643e5b107de948d1d74ce5 \
	I0429 11:34:51.994415 1237617 kubeadm.go:309] 	--control-plane 
	I0429 11:34:51.994428 1237617 kubeadm.go:309] 
	I0429 11:34:51.994512 1237617 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 11:34:51.994521 1237617 kubeadm.go:309] 
	I0429 11:34:51.994599 1237617 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token niags0.uqyndtemqqmk9gvx \
	I0429 11:34:51.994700 1237617 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:76846a2c6b2d6c4faa2ca5b730d7f0eab7128ed63e643e5b107de948d1d74ce5 
	I0429 11:34:51.997859 1237617 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1058-aws\n", err: exit status 1
	I0429 11:34:51.997979 1237617 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 11:34:51.997999 1237617 cni.go:84] Creating CNI manager for ""
	I0429 11:34:51.998011 1237617 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 11:34:51.999955 1237617 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 11:34:52.002580 1237617 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 11:34:52.008870 1237617 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 11:34:52.008896 1237617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 11:34:52.029513 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 11:34:52.333444 1237617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 11:34:52.333512 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:52.333664 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-760922 minikube.k8s.io/updated_at=2024_04_29T11_34_52_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d minikube.k8s.io/name=addons-760922 minikube.k8s.io/primary=true
	I0429 11:34:52.513796 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:52.513854 1237617 ops.go:34] apiserver oom_adj: -16
	I0429 11:34:53.014079 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:53.514383 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:54.014482 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:54.514459 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:55.014031 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:55.514460 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:56.014460 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:56.514713 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:57.013947 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:57.514528 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:58.014864 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:58.514796 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:59.014612 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:34:59.514309 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:00.018196 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:00.514807 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:01.013956 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:01.514391 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:02.014498 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:02.514228 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:03.013974 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:03.513915 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:04.014166 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:04.514195 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:05.014664 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:05.513940 1237617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 11:35:05.639492 1237617 kubeadm.go:1107] duration metric: took 13.306043821s to wait for elevateKubeSystemPrivileges
	W0429 11:35:05.639524 1237617 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 11:35:05.639531 1237617 kubeadm.go:393] duration metric: took 30.569958108s to StartCluster
	I0429 11:35:05.639545 1237617 settings.go:142] acquiring lock: {Name:mk0ef22430695db96615335cd2f3ba564b8d0f0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:35:05.640148 1237617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18756-1231546/kubeconfig
	I0429 11:35:05.640527 1237617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/kubeconfig: {Name:mk3a783043373f26fbcf8c9fca1b15742ae22d84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 11:35:05.641143 1237617 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 11:35:05.643291 1237617 out.go:177] * Verifying Kubernetes components...
	I0429 11:35:05.641232 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 11:35:05.641414 1237617 config.go:182] Loaded profile config "addons-760922": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 11:35:05.641422 1237617 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0429 11:35:05.645473 1237617 addons.go:69] Setting yakd=true in profile "addons-760922"
	I0429 11:35:05.645499 1237617 addons.go:234] Setting addon yakd=true in "addons-760922"
	I0429 11:35:05.645528 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.646021 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.646125 1237617 addons.go:69] Setting ingress=true in profile "addons-760922"
	I0429 11:35:05.646153 1237617 addons.go:234] Setting addon ingress=true in "addons-760922"
	I0429 11:35:05.646192 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.646549 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.646990 1237617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 11:35:05.647146 1237617 addons.go:69] Setting cloud-spanner=true in profile "addons-760922"
	I0429 11:35:05.647164 1237617 addons.go:234] Setting addon cloud-spanner=true in "addons-760922"
	I0429 11:35:05.647184 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.647532 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.647836 1237617 addons.go:69] Setting ingress-dns=true in profile "addons-760922"
	I0429 11:35:05.647859 1237617 addons.go:234] Setting addon ingress-dns=true in "addons-760922"
	I0429 11:35:05.647895 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.648269 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.650422 1237617 addons.go:69] Setting inspektor-gadget=true in profile "addons-760922"
	I0429 11:35:05.650453 1237617 addons.go:234] Setting addon inspektor-gadget=true in "addons-760922"
	I0429 11:35:05.650477 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.650853 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.664756 1237617 addons.go:69] Setting metrics-server=true in profile "addons-760922"
	I0429 11:35:05.664808 1237617 addons.go:234] Setting addon metrics-server=true in "addons-760922"
	I0429 11:35:05.664846 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.665296 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.665569 1237617 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-760922"
	I0429 11:35:05.665738 1237617 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-760922"
	I0429 11:35:05.665828 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.675245 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.665896 1237617 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-760922"
	I0429 11:35:05.705384 1237617 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-760922"
	I0429 11:35:05.705452 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.705917 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.665902 1237617 addons.go:69] Setting registry=true in profile "addons-760922"
	I0429 11:35:05.665906 1237617 addons.go:69] Setting storage-provisioner=true in profile "addons-760922"
	I0429 11:35:05.665910 1237617 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-760922"
	I0429 11:35:05.665916 1237617 addons.go:69] Setting volumesnapshots=true in profile "addons-760922"
	I0429 11:35:05.671332 1237617 addons.go:69] Setting default-storageclass=true in profile "addons-760922"
	I0429 11:35:05.671348 1237617 addons.go:69] Setting gcp-auth=true in profile "addons-760922"
	I0429 11:35:05.727349 1237617 mustload.go:65] Loading cluster: addons-760922
	I0429 11:35:05.727524 1237617 config.go:182] Loaded profile config "addons-760922": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 11:35:05.727767 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.742410 1237617 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0429 11:35:05.745795 1237617 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0429 11:35:05.747872 1237617 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0429 11:35:05.753750 1237617 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 11:35:05.753771 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0429 11:35:05.753837 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:05.761142 1237617 addons.go:234] Setting addon registry=true in "addons-760922"
	I0429 11:35:05.761194 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.761628 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.786162 1237617 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0429 11:35:05.788860 1237617 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 11:35:05.788924 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0429 11:35:05.789037 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:05.777403 1237617 addons.go:234] Setting addon storage-provisioner=true in "addons-760922"
	I0429 11:35:05.796921 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.797388 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.777424 1237617 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-760922"
	I0429 11:35:05.810166 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.823435 1237617 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0429 11:35:05.777447 1237617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-760922"
	I0429 11:35:05.777436 1237617 addons.go:234] Setting addon volumesnapshots=true in "addons-760922"
	I0429 11:35:05.827897 1237617 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0429 11:35:05.827917 1237617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0429 11:35:05.828219 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.832896 1237617 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0429 11:35:05.834916 1237617 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0429 11:35:05.834936 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0429 11:35:05.835002 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:05.833081 1237617 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0429 11:35:05.833136 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.833149 1237617 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0429 11:35:05.833161 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0429 11:35:05.852770 1237617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0429 11:35:05.858206 1237617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0429 11:35:05.870428 1237617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0429 11:35:05.868613 1237617 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0429 11:35:05.869092 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:05.869398 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.870174 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.889666 1237617 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 11:35:05.889727 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 11:35:05.889791 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:05.901269 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:05.901632 1237617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0429 11:35:05.904782 1237617 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0429 11:35:05.910354 1237617 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0429 11:35:05.919070 1237617 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0429 11:35:05.906896 1237617 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 11:35:05.889695 1237617 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0429 11:35:05.928569 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0429 11:35:05.928751 1237617 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0429 11:35:05.928768 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0429 11:35:05.928800 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:05.928944 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:05.938835 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0429 11:35:05.938931 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:05.962134 1237617 out.go:177]   - Using image docker.io/registry:2.8.3
	I0429 11:35:05.966136 1237617 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0429 11:35:05.964892 1237617 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-760922"
	I0429 11:35:05.967956 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:05.968469 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:05.991056 1237617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 11:35:05.989142 1237617 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0429 11:35:05.993242 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:05.994775 1237617 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 11:35:05.994826 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 11:35:05.994914 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:06.008952 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0429 11:35:06.009155 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:06.027861 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.032992 1237617 addons.go:234] Setting addon default-storageclass=true in "addons-760922"
	I0429 11:35:06.033033 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:06.033749 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:06.063032 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.090261 1237617 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0429 11:35:06.092021 1237617 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0429 11:35:06.092051 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0429 11:35:06.092120 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:06.182815 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.182900 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 11:35:06.183267 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.201075 1237617 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0429 11:35:06.197804 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 11:35:06.197861 1237617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 11:35:06.209731 1237617 out.go:177]   - Using image docker.io/busybox:stable
	I0429 11:35:06.213599 1237617 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 11:35:06.213666 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0429 11:35:06.213753 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:06.219440 1237617 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 11:35:06.219503 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 11:35:06.219582 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:06.209494 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.209664 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.212311 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.225290 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.231301 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.263628 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.264376 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:06.337457 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 11:35:06.424087 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0429 11:35:06.539124 1237617 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0429 11:35:06.539190 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0429 11:35:06.699280 1237617 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0429 11:35:06.699351 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0429 11:35:06.707349 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 11:35:06.761601 1237617 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0429 11:35:06.761673 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0429 11:35:06.777229 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 11:35:06.784634 1237617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 11:35:06.784778 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0429 11:35:06.803750 1237617 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0429 11:35:06.803826 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0429 11:35:06.826840 1237617 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0429 11:35:06.826911 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0429 11:35:06.829376 1237617 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0429 11:35:06.829444 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0429 11:35:06.831984 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 11:35:06.849598 1237617 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0429 11:35:06.849672 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0429 11:35:06.857731 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 11:35:06.899127 1237617 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0429 11:35:06.899199 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0429 11:35:06.936297 1237617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 11:35:06.936371 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 11:35:06.974947 1237617 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0429 11:35:06.975022 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0429 11:35:06.997649 1237617 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0429 11:35:06.997722 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0429 11:35:07.016732 1237617 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0429 11:35:07.016807 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0429 11:35:07.036494 1237617 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0429 11:35:07.036568 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0429 11:35:07.077873 1237617 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0429 11:35:07.077945 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0429 11:35:07.126908 1237617 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 11:35:07.126980 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 11:35:07.155867 1237617 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0429 11:35:07.155942 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0429 11:35:07.161066 1237617 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0429 11:35:07.161147 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0429 11:35:07.221387 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0429 11:35:07.227338 1237617 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0429 11:35:07.227410 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0429 11:35:07.239033 1237617 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0429 11:35:07.239104 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0429 11:35:07.277033 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 11:35:07.279138 1237617 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0429 11:35:07.279225 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0429 11:35:07.302613 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0429 11:35:07.336747 1237617 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0429 11:35:07.336820 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0429 11:35:07.359613 1237617 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0429 11:35:07.359688 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0429 11:35:07.381889 1237617 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0429 11:35:07.381961 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0429 11:35:07.482025 1237617 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 11:35:07.482095 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0429 11:35:07.486475 1237617 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 11:35:07.486542 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0429 11:35:07.594806 1237617 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0429 11:35:07.594876 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0429 11:35:07.636424 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 11:35:07.703123 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 11:35:07.759902 1237617 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0429 11:35:07.759926 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0429 11:35:07.921934 1237617 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0429 11:35:07.921960 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0429 11:35:08.059628 1237617 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0429 11:35:08.059690 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0429 11:35:08.199643 1237617 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 11:35:08.199715 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0429 11:35:08.307541 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 11:35:11.469331 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.286223453s)
	I0429 11:35:11.469872 1237617 addons.go:470] Verifying addon ingress=true in "addons-760922"
	I0429 11:35:11.472151 1237617 out.go:177] * Verifying ingress addon...
	I0429 11:35:11.469433 1237617 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.262883746s)
	I0429 11:35:11.469549 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.132073692s)
	I0429 11:35:11.469591 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.045433569s)
	I0429 11:35:11.469638 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.762214126s)
	I0429 11:35:11.469657 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.692352673s)
	I0429 11:35:11.469691 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.637653235s)
	I0429 11:35:11.469706 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.611909685s)
	I0429 11:35:11.469734 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.248274658s)
	I0429 11:35:11.469778 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.192683179s)
	I0429 11:35:11.469826 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.167136092s)
	I0429 11:35:11.469504 1237617 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.263187301s)
	I0429 11:35:11.474651 1237617 node_ready.go:35] waiting up to 6m0s for node "addons-760922" to be "Ready" ...
	I0429 11:35:11.475487 1237617 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0429 11:35:11.475641 1237617 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0429 11:35:11.476525 1237617 addons.go:470] Verifying addon registry=true in "addons-760922"
	I0429 11:35:11.478902 1237617 out.go:177] * Verifying registry addon...
	I0429 11:35:11.476688 1237617 addons.go:470] Verifying addon metrics-server=true in "addons-760922"
	I0429 11:35:11.483966 1237617 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-760922 service yakd-dashboard -n yakd-dashboard
	
	I0429 11:35:11.481615 1237617 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0429 11:35:11.496239 1237617 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0429 11:35:11.496265 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:11.505495 1237617 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0429 11:35:11.505523 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0429 11:35:11.522950 1237617 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0429 11:35:11.664622 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.028025343s)
	W0429 11:35:11.664718 1237617 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0429 11:35:11.664775 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.961417844s)
	I0429 11:35:11.664753 1237617 retry.go:31] will retry after 153.199739ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0429 11:35:11.818980 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 11:35:11.919552 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.611877449s)
	I0429 11:35:11.919640 1237617 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-760922"
	I0429 11:35:11.923843 1237617 out.go:177] * Verifying csi-hostpath-driver addon...
	I0429 11:35:11.926303 1237617 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0429 11:35:12.010536 1237617 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0429 11:35:12.010567 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:12.025496 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:12.051337 1237617 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-760922" context rescaled to 1 replicas
	I0429 11:35:12.070932 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:12.435757 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:12.482069 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:12.491729 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:12.931395 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:12.982713 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:12.990178 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:13.430902 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:13.479968 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:13.483415 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:13.490566 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:13.931068 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:13.981603 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:13.994992 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:14.192598 1237617 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0429 11:35:14.192715 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:14.215217 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:14.336529 1237617 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0429 11:35:14.358990 1237617 addons.go:234] Setting addon gcp-auth=true in "addons-760922"
	I0429 11:35:14.359044 1237617 host.go:66] Checking if "addons-760922" exists ...
	I0429 11:35:14.359499 1237617 cli_runner.go:164] Run: docker container inspect addons-760922 --format={{.State.Status}}
	I0429 11:35:14.390174 1237617 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0429 11:35:14.390226 1237617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-760922
	I0429 11:35:14.409026 1237617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34278 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/addons-760922/id_rsa Username:docker}
	I0429 11:35:14.437549 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:14.505152 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:14.508764 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:14.931060 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:14.982378 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:14.989444 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:14.991091 1237617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.171997425s)
	I0429 11:35:14.993594 1237617 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0429 11:35:14.995787 1237617 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0429 11:35:14.998206 1237617 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0429 11:35:14.998231 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0429 11:35:15.032287 1237617 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0429 11:35:15.032320 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0429 11:35:15.064481 1237617 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 11:35:15.064515 1237617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0429 11:35:15.090795 1237617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 11:35:15.431860 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:15.486778 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:15.500808 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:15.800111 1237617 addons.go:470] Verifying addon gcp-auth=true in "addons-760922"
	I0429 11:35:15.805931 1237617 out.go:177] * Verifying gcp-auth addon...
	I0429 11:35:15.808567 1237617 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0429 11:35:15.825697 1237617 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0429 11:35:15.825723 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:15.931520 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:15.981408 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:15.982948 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:16.011320 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:16.311975 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:16.431566 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:16.482627 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:16.491362 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:16.815462 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:16.931327 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:16.985205 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:16.990569 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:17.313082 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:17.431669 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:17.481757 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:17.489472 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:17.812523 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:17.931015 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:17.979775 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:17.989589 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:18.312507 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:18.431016 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:18.478063 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:18.480047 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:18.489740 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:18.812043 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:18.932246 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:18.981504 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:18.990526 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:19.312118 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:19.431406 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:19.479732 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:19.489258 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:19.815555 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:19.931117 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:19.981553 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:19.990363 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:20.312465 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:20.430618 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:20.479688 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:20.480647 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:20.489319 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:20.811751 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:20.930954 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:20.980968 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:20.990003 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:21.312496 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:21.430628 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:21.479745 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:21.489937 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:21.812123 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:21.930691 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:21.979790 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:21.989535 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:22.312617 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:22.431042 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:22.480813 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:22.489318 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:22.812795 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:22.931029 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:22.978998 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:22.979504 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:22.990102 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:23.312831 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:23.430660 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:23.479678 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:23.489487 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:23.813087 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:23.930832 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:23.979826 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:23.989516 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:24.311894 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:24.430587 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:24.479406 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:24.490052 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:24.811976 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:24.933639 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:24.980261 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:24.990393 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:25.312578 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:25.430628 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:25.477558 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:25.479949 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:25.490387 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:25.811623 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:25.931004 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:25.981637 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:25.989432 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:26.312604 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:26.430452 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:26.480346 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:26.490008 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:26.812656 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:26.930412 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:26.980511 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:26.989243 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:27.312280 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:27.431088 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:27.477776 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:27.479638 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:27.490227 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:27.812198 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:27.930150 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:27.979658 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:27.990482 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:28.312515 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:28.431264 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:28.480594 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:28.489352 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:28.812460 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:28.931131 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:28.979413 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:28.990100 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:29.312627 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:29.430204 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:29.478269 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:29.479357 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:29.489667 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:29.812284 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:29.931132 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:29.979615 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:29.989890 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:30.311851 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:30.430524 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:30.480773 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:30.489528 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:30.812465 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:30.931130 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:30.980166 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:30.990276 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:31.312299 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:31.430305 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:31.479966 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:31.480352 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:31.489867 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:31.811800 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:31.930510 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:31.980282 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:31.989986 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:32.312596 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:32.430821 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:32.480165 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:32.489819 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:32.812393 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:32.930531 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:32.980241 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:32.990089 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:33.312489 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:33.431454 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:33.479867 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:33.489526 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:33.811899 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:33.931380 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:33.978864 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:33.979622 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:33.989389 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:34.312618 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:34.430585 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:34.480330 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:34.490225 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:34.812078 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:34.931079 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:34.980151 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:34.999206 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:35.312350 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:35.430510 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:35.479777 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:35.489687 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:35.811599 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:35.930929 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:35.980014 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:35.990123 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:36.312120 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:36.431283 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:36.479887 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:36.480278 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:36.489762 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:36.811607 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:36.931023 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:36.979948 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:36.990246 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:37.312583 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:37.430366 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:37.479980 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:37.489672 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:37.812114 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:37.931690 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:37.981371 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:37.990491 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:38.311800 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:38.434109 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:38.480138 1237617 node_ready.go:53] node "addons-760922" has status "Ready":"False"
	I0429 11:35:38.482466 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:38.489293 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:38.815345 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:38.998551 1237617 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0429 11:35:38.998623 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:39.004218 1237617 node_ready.go:49] node "addons-760922" has status "Ready":"True"
	I0429 11:35:39.004302 1237617 node_ready.go:38] duration metric: took 27.52961361s for node "addons-760922" to be "Ready" ...
	I0429 11:35:39.004339 1237617 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 11:35:39.027936 1237617 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0429 11:35:39.028010 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:39.032489 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:39.078274 1237617 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hsk8z" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:39.314809 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:39.436472 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:39.482310 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:39.510294 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:39.811568 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:39.931842 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:39.980402 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:39.990886 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:40.313663 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:40.433992 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:40.509293 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:40.510223 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:40.830470 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:40.932487 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:40.980553 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:40.992834 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:41.084986 1237617 pod_ready.go:92] pod "coredns-7db6d8ff4d-hsk8z" in "kube-system" namespace has status "Ready":"True"
	I0429 11:35:41.085016 1237617 pod_ready.go:81] duration metric: took 2.006666316s for pod "coredns-7db6d8ff4d-hsk8z" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.085041 1237617 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-760922" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.090770 1237617 pod_ready.go:92] pod "etcd-addons-760922" in "kube-system" namespace has status "Ready":"True"
	I0429 11:35:41.090795 1237617 pod_ready.go:81] duration metric: took 5.746347ms for pod "etcd-addons-760922" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.090810 1237617 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-760922" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.096191 1237617 pod_ready.go:92] pod "kube-apiserver-addons-760922" in "kube-system" namespace has status "Ready":"True"
	I0429 11:35:41.096217 1237617 pod_ready.go:81] duration metric: took 5.399526ms for pod "kube-apiserver-addons-760922" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.096230 1237617 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-760922" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.108991 1237617 pod_ready.go:92] pod "kube-controller-manager-addons-760922" in "kube-system" namespace has status "Ready":"True"
	I0429 11:35:41.109036 1237617 pod_ready.go:81] duration metric: took 12.779373ms for pod "kube-controller-manager-addons-760922" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.109050 1237617 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w598j" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.118598 1237617 pod_ready.go:92] pod "kube-proxy-w598j" in "kube-system" namespace has status "Ready":"True"
	I0429 11:35:41.118626 1237617 pod_ready.go:81] duration metric: took 9.567232ms for pod "kube-proxy-w598j" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.118639 1237617 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-760922" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.312080 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:41.434984 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:41.485004 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:41.487686 1237617 pod_ready.go:92] pod "kube-scheduler-addons-760922" in "kube-system" namespace has status "Ready":"True"
	I0429 11:35:41.487712 1237617 pod_ready.go:81] duration metric: took 369.065003ms for pod "kube-scheduler-addons-760922" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.487724 1237617 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace to be "Ready" ...
	I0429 11:35:41.493841 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:41.813004 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:41.931789 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:41.979655 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:41.991891 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:42.312757 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:42.433630 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:42.480609 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:42.495667 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:42.813046 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:42.938434 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:42.981076 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:42.992285 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:43.312550 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:43.432908 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:43.480129 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:43.490518 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:43.494649 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:35:43.812452 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:43.932825 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:43.980135 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:43.990831 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:44.312731 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:44.432306 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:44.482407 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:44.491427 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:44.813064 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:44.933612 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:44.982171 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:44.991015 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:45.314490 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:45.432477 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:45.480847 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:45.493238 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:45.496279 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:35:45.812286 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:45.933580 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:45.982280 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:45.992133 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:46.312592 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:46.433202 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:46.481132 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:46.491984 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:46.817523 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:46.932927 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:46.980085 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:46.992352 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:47.313144 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:47.433936 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:47.480728 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:47.492040 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:47.501694 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:35:47.818132 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:47.939084 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:47.981847 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:48.009837 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:48.314894 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:48.433353 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:48.481326 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:48.492588 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:48.816192 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:48.935581 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:48.984036 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:49.008137 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:49.313250 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:49.434155 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:49.481467 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:49.528177 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:49.531458 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:35:49.826944 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:49.961261 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:49.981180 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:50.002836 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:50.314743 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:50.440501 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:50.483103 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:50.501197 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:50.813194 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:50.937151 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:50.981913 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:50.991474 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:51.314082 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:51.434689 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:51.482786 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:51.493467 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:51.813447 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:51.931933 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:51.980048 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:51.991913 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:51.994475 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:35:52.312560 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:52.432702 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:52.479906 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:52.491501 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:52.818552 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:52.932582 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:52.979864 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:52.989940 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:53.314700 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:53.434194 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:53.484678 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:53.495492 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:53.813213 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:53.933758 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:53.981504 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:53.992624 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:54.024113 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:35:54.313023 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:54.433173 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:54.480893 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:54.490892 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:54.813761 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:54.934168 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:54.984396 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:54.990982 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:55.312598 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:55.433733 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:55.480540 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:55.493548 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:55.812557 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:55.934926 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:56.006118 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:56.034630 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:56.036937 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:35:56.312380 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:56.432518 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:56.480471 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:56.491429 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:56.812970 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:56.932703 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:57.007914 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:57.019533 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:57.316921 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:57.435047 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:57.481075 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:57.520881 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:57.813020 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:57.932401 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:57.981565 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:57.998066 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:58.312117 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:58.432375 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:58.479663 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:58.491683 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:58.495015 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:35:58.812849 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:58.937297 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:58.980402 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:58.991384 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:59.313237 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:59.435080 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:59.480751 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:59.492323 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:35:59.812589 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:35:59.932435 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:35:59.981395 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:35:59.995724 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:00.329110 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:00.441925 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:00.486657 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:00.493293 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:00.497726 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:00.812510 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:00.933315 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:00.980850 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:00.991696 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:01.312741 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:01.433297 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:01.481232 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:01.495230 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:01.813036 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:01.934908 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:01.980414 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:01.992322 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:02.312272 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:02.431835 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:02.479780 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:02.493496 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:02.812692 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:02.932585 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:02.980467 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:02.990764 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:02.995298 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:03.312178 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:03.431540 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:03.480499 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:03.496700 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:03.812853 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:03.933596 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:03.980303 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:03.993258 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:04.313223 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:04.434017 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:04.481297 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:04.493022 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:04.813488 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:04.937546 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:04.980930 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:04.991454 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:05.013957 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:05.313148 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:05.435078 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:05.486226 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:05.501530 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:05.812891 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:05.933081 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:05.980508 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:05.994408 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:06.312582 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:06.433411 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:06.480817 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:06.492194 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:06.812590 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:06.932695 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:06.980626 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:06.990359 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:07.312339 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:07.432991 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:07.481059 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:07.491021 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:07.496724 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:07.812355 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:07.931579 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:07.980274 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:07.990698 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:08.312152 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:08.432081 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:08.480019 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:08.492521 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:08.812637 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:08.958437 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:08.985493 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:09.014069 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:09.313186 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:09.433924 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:09.482095 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:09.491047 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:09.498890 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:09.812971 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:09.932164 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:09.980880 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:10.016032 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:10.313582 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:10.432275 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:10.480299 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:10.492761 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:10.812475 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:10.932660 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:10.995478 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:10.999156 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:11.315882 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:11.432114 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:11.480489 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:11.491863 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:11.811989 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:11.933279 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:11.981019 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:11.991643 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:12.001463 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:12.312334 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:12.432749 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:12.480728 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:12.496986 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:12.812590 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:12.934119 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:12.989969 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:13.023998 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:13.313914 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:13.432908 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:13.482913 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:13.503799 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:13.812651 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:13.932963 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:13.983802 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:13.996553 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:14.022498 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:14.312912 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:14.432767 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:14.480557 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:14.491380 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:14.811803 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:14.933972 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:14.981003 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:14.992408 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:15.314272 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:15.432111 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:15.480285 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:15.490981 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:15.811992 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:15.933212 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:15.980110 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:15.992103 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:16.316376 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:16.431620 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:16.479839 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:16.490336 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:16.495616 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:16.813088 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:16.942794 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:16.980967 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:16.999537 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:17.314752 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:17.433076 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:17.480732 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:17.491517 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:17.812593 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:17.932159 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:17.980458 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:17.992452 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:18.312399 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:18.432135 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:18.480252 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:18.491224 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:18.499395 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:18.817680 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:18.933555 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:18.979686 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:18.990349 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:19.312854 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:19.433015 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:19.481699 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:19.501315 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:19.813506 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:19.935990 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:19.980825 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:19.995890 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:20.312726 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:20.442476 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:20.479750 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:20.491710 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:20.812343 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:20.933918 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:20.980373 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:20.992411 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:21.000242 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:21.312792 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:21.433429 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:21.480796 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:21.493357 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:21.813204 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:21.937503 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:21.982737 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:22.017612 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:22.312908 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:22.435063 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:22.480071 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:22.492205 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:22.814006 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:22.932715 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:22.984445 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:23.006121 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:23.006586 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:23.313265 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:23.432352 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:23.480384 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:23.504044 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:23.812809 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:23.933876 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:23.980529 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:23.994516 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 11:36:24.321879 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:24.434929 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:24.482511 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:24.493724 1237617 kapi.go:107] duration metric: took 1m13.012100068s to wait for kubernetes.io/minikube-addons=registry ...
	I0429 11:36:24.812358 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:24.933322 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:24.980393 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:25.315581 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:25.440244 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:25.481553 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:25.497571 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:25.813056 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:25.934239 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:25.980814 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:26.312412 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:26.433201 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:26.480945 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:26.813187 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:26.933703 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:26.981699 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:27.315958 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:27.437309 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:27.481843 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:27.812380 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:27.933040 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:27.981266 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:28.022251 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:28.312925 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:28.433042 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:28.488579 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:28.812272 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:28.932536 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:28.980204 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:29.343789 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:29.432684 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:29.480949 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:29.812632 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:29.932764 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:29.980327 1237617 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 11:36:30.313595 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:30.433028 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:30.485401 1237617 kapi.go:107] duration metric: took 1m19.009910497s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0429 11:36:30.499117 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:30.814545 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:30.932812 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:31.315983 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:31.433915 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:31.812411 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:31.932400 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:32.314514 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:32.434507 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:32.816925 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:32.935959 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:32.997219 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:33.317120 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:33.432572 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:33.812626 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:33.932934 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:34.314617 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:34.436072 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:34.811936 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:34.932560 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:35.312639 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:35.434326 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:35.495516 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:35.812640 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:35.933202 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:36.313024 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:36.435262 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:36.812102 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:36.933725 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:37.312349 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:37.433336 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:37.812358 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:37.931557 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:37.994307 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:38.312568 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:38.436881 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:38.813498 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:38.932524 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:39.312860 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:39.432098 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 11:36:39.811682 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:39.931718 1237617 kapi.go:107] duration metric: took 1m28.005415179s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0429 11:36:39.995750 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:40.313018 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:40.812719 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:41.312661 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:41.812505 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:42.313031 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:42.494481 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:42.812926 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:43.312756 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:43.813008 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:44.311979 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:44.813800 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:44.993725 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:45.313353 1237617 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 11:36:45.812496 1237617 kapi.go:107] duration metric: took 1m30.003934514s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0429 11:36:45.814419 1237617 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-760922 cluster.
	I0429 11:36:45.816192 1237617 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0429 11:36:45.818001 1237617 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0429 11:36:45.820140 1237617 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, storage-provisioner, nvidia-device-plugin, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0429 11:36:45.822069 1237617 addons.go:505] duration metric: took 1m40.180633059s for enable addons: enabled=[ingress-dns cloud-spanner storage-provisioner nvidia-device-plugin metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0429 11:36:46.994012 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:49.493917 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:51.494087 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:53.996275 1237617 pod_ready.go:102] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"False"
	I0429 11:36:56.494729 1237617 pod_ready.go:92] pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace has status "Ready":"True"
	I0429 11:36:56.494757 1237617 pod_ready.go:81] duration metric: took 1m15.007025308s for pod "metrics-server-c59844bb4-t8bst" in "kube-system" namespace to be "Ready" ...
	I0429 11:36:56.494771 1237617 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-7lk7c" in "kube-system" namespace to be "Ready" ...
	I0429 11:36:56.503538 1237617 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-7lk7c" in "kube-system" namespace has status "Ready":"True"
	I0429 11:36:56.503565 1237617 pod_ready.go:81] duration metric: took 8.786453ms for pod "nvidia-device-plugin-daemonset-7lk7c" in "kube-system" namespace to be "Ready" ...
	I0429 11:36:56.503587 1237617 pod_ready.go:38] duration metric: took 1m17.499204851s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 11:36:56.503637 1237617 api_server.go:52] waiting for apiserver process to appear ...
	I0429 11:36:56.503684 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 11:36:56.503748 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 11:36:56.557190 1237617 cri.go:89] found id: "a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123"
	I0429 11:36:56.557221 1237617 cri.go:89] found id: ""
	I0429 11:36:56.557230 1237617 logs.go:276] 1 containers: [a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123]
	I0429 11:36:56.557292 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:36:56.561667 1237617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 11:36:56.561735 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 11:36:56.604310 1237617 cri.go:89] found id: "18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092"
	I0429 11:36:56.604344 1237617 cri.go:89] found id: ""
	I0429 11:36:56.604353 1237617 logs.go:276] 1 containers: [18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092]
	I0429 11:36:56.604406 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:36:56.608066 1237617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 11:36:56.608150 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 11:36:56.650509 1237617 cri.go:89] found id: "f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853"
	I0429 11:36:56.650532 1237617 cri.go:89] found id: ""
	I0429 11:36:56.650541 1237617 logs.go:276] 1 containers: [f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853]
	I0429 11:36:56.650597 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:36:56.654183 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 11:36:56.654275 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 11:36:56.700503 1237617 cri.go:89] found id: "0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998"
	I0429 11:36:56.700528 1237617 cri.go:89] found id: ""
	I0429 11:36:56.700542 1237617 logs.go:276] 1 containers: [0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998]
	I0429 11:36:56.700600 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:36:56.704099 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 11:36:56.704167 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 11:36:56.745555 1237617 cri.go:89] found id: "f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648"
	I0429 11:36:56.745575 1237617 cri.go:89] found id: ""
	I0429 11:36:56.745583 1237617 logs.go:276] 1 containers: [f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648]
	I0429 11:36:56.745639 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:36:56.749072 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 11:36:56.749155 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 11:36:56.803115 1237617 cri.go:89] found id: "836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4"
	I0429 11:36:56.803141 1237617 cri.go:89] found id: ""
	I0429 11:36:56.803150 1237617 logs.go:276] 1 containers: [836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4]
	I0429 11:36:56.803214 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:36:56.807011 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 11:36:56.807080 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 11:36:56.848511 1237617 cri.go:89] found id: "d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172"
	I0429 11:36:56.848534 1237617 cri.go:89] found id: ""
	I0429 11:36:56.848542 1237617 logs.go:276] 1 containers: [d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172]
	I0429 11:36:56.848598 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:36:56.852131 1237617 logs.go:123] Gathering logs for dmesg ...
	I0429 11:36:56.852157 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 11:36:56.871317 1237617 logs.go:123] Gathering logs for kube-apiserver [a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123] ...
	I0429 11:36:56.871344 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123"
	I0429 11:36:56.945284 1237617 logs.go:123] Gathering logs for etcd [18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092] ...
	I0429 11:36:56.945317 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092"
	I0429 11:36:56.991448 1237617 logs.go:123] Gathering logs for kube-scheduler [0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998] ...
	I0429 11:36:56.991479 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998"
	I0429 11:36:57.040115 1237617 logs.go:123] Gathering logs for kube-controller-manager [836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4] ...
	I0429 11:36:57.040145 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4"
	I0429 11:36:57.109251 1237617 logs.go:123] Gathering logs for kubelet ...
	I0429 11:36:57.109286 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 11:36:57.162562 1237617 logs.go:138] Found kubelet problem: Apr 29 11:35:38 addons-760922 kubelet[1488]: W0429 11:35:38.678625    1488 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	W0429 11:36:57.162815 1237617 logs.go:138] Found kubelet problem: Apr 29 11:35:38 addons-760922 kubelet[1488]: E0429 11:35:38.678665    1488 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	I0429 11:36:57.208029 1237617 logs.go:123] Gathering logs for describe nodes ...
	I0429 11:36:57.208065 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 11:36:57.391682 1237617 logs.go:123] Gathering logs for coredns [f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853] ...
	I0429 11:36:57.391711 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853"
	I0429 11:36:57.435484 1237617 logs.go:123] Gathering logs for kube-proxy [f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648] ...
	I0429 11:36:57.435515 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648"
	I0429 11:36:57.478379 1237617 logs.go:123] Gathering logs for kindnet [d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172] ...
	I0429 11:36:57.478409 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172"
	I0429 11:36:57.517202 1237617 logs.go:123] Gathering logs for CRI-O ...
	I0429 11:36:57.517229 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 11:36:57.612778 1237617 logs.go:123] Gathering logs for container status ...
	I0429 11:36:57.612814 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 11:36:57.674139 1237617 out.go:304] Setting ErrFile to fd 2...
	I0429 11:36:57.674168 1237617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 11:36:57.674242 1237617 out.go:239] X Problems detected in kubelet:
	W0429 11:36:57.674256 1237617 out.go:239]   Apr 29 11:35:38 addons-760922 kubelet[1488]: W0429 11:35:38.678625    1488 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	W0429 11:36:57.674265 1237617 out.go:239]   Apr 29 11:35:38 addons-760922 kubelet[1488]: E0429 11:35:38.678665    1488 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	I0429 11:36:57.674425 1237617 out.go:304] Setting ErrFile to fd 2...
	I0429 11:36:57.674434 1237617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:37:07.676508 1237617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 11:37:07.690220 1237617 api_server.go:72] duration metric: took 2m2.049038627s to wait for apiserver process to appear ...
	I0429 11:37:07.690245 1237617 api_server.go:88] waiting for apiserver healthz status ...
	I0429 11:37:07.690279 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 11:37:07.690351 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 11:37:07.730930 1237617 cri.go:89] found id: "a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123"
	I0429 11:37:07.730956 1237617 cri.go:89] found id: ""
	I0429 11:37:07.730964 1237617 logs.go:276] 1 containers: [a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123]
	I0429 11:37:07.731023 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:07.734528 1237617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 11:37:07.734613 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 11:37:07.776755 1237617 cri.go:89] found id: "18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092"
	I0429 11:37:07.776779 1237617 cri.go:89] found id: ""
	I0429 11:37:07.776788 1237617 logs.go:276] 1 containers: [18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092]
	I0429 11:37:07.776846 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:07.780552 1237617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 11:37:07.780624 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 11:37:07.817292 1237617 cri.go:89] found id: "f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853"
	I0429 11:37:07.817313 1237617 cri.go:89] found id: ""
	I0429 11:37:07.817320 1237617 logs.go:276] 1 containers: [f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853]
	I0429 11:37:07.817395 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:07.820907 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 11:37:07.820974 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 11:37:07.868213 1237617 cri.go:89] found id: "0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998"
	I0429 11:37:07.868234 1237617 cri.go:89] found id: ""
	I0429 11:37:07.868242 1237617 logs.go:276] 1 containers: [0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998]
	I0429 11:37:07.868328 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:07.871813 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 11:37:07.871884 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 11:37:07.913835 1237617 cri.go:89] found id: "f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648"
	I0429 11:37:07.913862 1237617 cri.go:89] found id: ""
	I0429 11:37:07.913871 1237617 logs.go:276] 1 containers: [f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648]
	I0429 11:37:07.913953 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:07.917724 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 11:37:07.917796 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 11:37:07.961900 1237617 cri.go:89] found id: "836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4"
	I0429 11:37:07.961926 1237617 cri.go:89] found id: ""
	I0429 11:37:07.961935 1237617 logs.go:276] 1 containers: [836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4]
	I0429 11:37:07.962004 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:07.965524 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 11:37:07.965595 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 11:37:08.011070 1237617 cri.go:89] found id: "d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172"
	I0429 11:37:08.011095 1237617 cri.go:89] found id: ""
	I0429 11:37:08.011104 1237617 logs.go:276] 1 containers: [d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172]
	I0429 11:37:08.011170 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:08.014833 1237617 logs.go:123] Gathering logs for kube-scheduler [0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998] ...
	I0429 11:37:08.014861 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998"
	I0429 11:37:08.058312 1237617 logs.go:123] Gathering logs for kube-controller-manager [836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4] ...
	I0429 11:37:08.058343 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4"
	I0429 11:37:08.127521 1237617 logs.go:123] Gathering logs for kindnet [d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172] ...
	I0429 11:37:08.127555 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172"
	I0429 11:37:08.166924 1237617 logs.go:123] Gathering logs for dmesg ...
	I0429 11:37:08.166950 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 11:37:08.185349 1237617 logs.go:123] Gathering logs for describe nodes ...
	I0429 11:37:08.185379 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 11:37:08.339169 1237617 logs.go:123] Gathering logs for kube-apiserver [a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123] ...
	I0429 11:37:08.339244 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123"
	I0429 11:37:08.417175 1237617 logs.go:123] Gathering logs for etcd [18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092] ...
	I0429 11:37:08.417220 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092"
	I0429 11:37:08.469841 1237617 logs.go:123] Gathering logs for coredns [f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853] ...
	I0429 11:37:08.469875 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853"
	I0429 11:37:08.509773 1237617 logs.go:123] Gathering logs for kube-proxy [f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648] ...
	I0429 11:37:08.509805 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648"
	I0429 11:37:08.547447 1237617 logs.go:123] Gathering logs for CRI-O ...
	I0429 11:37:08.547477 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 11:37:08.638499 1237617 logs.go:123] Gathering logs for container status ...
	I0429 11:37:08.638535 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 11:37:08.699116 1237617 logs.go:123] Gathering logs for kubelet ...
	I0429 11:37:08.699146 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 11:37:08.744110 1237617 logs.go:138] Found kubelet problem: Apr 29 11:35:38 addons-760922 kubelet[1488]: W0429 11:35:38.678625    1488 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	W0429 11:37:08.744338 1237617 logs.go:138] Found kubelet problem: Apr 29 11:35:38 addons-760922 kubelet[1488]: E0429 11:35:38.678665    1488 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	I0429 11:37:08.789893 1237617 out.go:304] Setting ErrFile to fd 2...
	I0429 11:37:08.789921 1237617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 11:37:08.789981 1237617 out.go:239] X Problems detected in kubelet:
	W0429 11:37:08.789995 1237617 out.go:239]   Apr 29 11:35:38 addons-760922 kubelet[1488]: W0429 11:35:38.678625    1488 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	W0429 11:37:08.790003 1237617 out.go:239]   Apr 29 11:35:38 addons-760922 kubelet[1488]: E0429 11:35:38.678665    1488 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	I0429 11:37:08.790013 1237617 out.go:304] Setting ErrFile to fd 2...
	I0429 11:37:08.790019 1237617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:37:18.791207 1237617 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 11:37:18.798871 1237617 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0429 11:37:18.799777 1237617 api_server.go:141] control plane version: v1.30.0
	I0429 11:37:18.799803 1237617 api_server.go:131] duration metric: took 11.109550875s to wait for apiserver health ...
	I0429 11:37:18.799812 1237617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 11:37:18.799834 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 11:37:18.799929 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 11:37:18.841038 1237617 cri.go:89] found id: "a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123"
	I0429 11:37:18.841059 1237617 cri.go:89] found id: ""
	I0429 11:37:18.841067 1237617 logs.go:276] 1 containers: [a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123]
	I0429 11:37:18.841129 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:18.844932 1237617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 11:37:18.845004 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 11:37:18.884960 1237617 cri.go:89] found id: "18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092"
	I0429 11:37:18.884982 1237617 cri.go:89] found id: ""
	I0429 11:37:18.884991 1237617 logs.go:276] 1 containers: [18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092]
	I0429 11:37:18.885056 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:18.888727 1237617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 11:37:18.888797 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 11:37:18.925797 1237617 cri.go:89] found id: "f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853"
	I0429 11:37:18.925819 1237617 cri.go:89] found id: ""
	I0429 11:37:18.925827 1237617 logs.go:276] 1 containers: [f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853]
	I0429 11:37:18.925883 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:18.930000 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 11:37:18.930073 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 11:37:18.972366 1237617 cri.go:89] found id: "0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998"
	I0429 11:37:18.972459 1237617 cri.go:89] found id: ""
	I0429 11:37:18.972482 1237617 logs.go:276] 1 containers: [0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998]
	I0429 11:37:18.972542 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:18.977687 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 11:37:18.977758 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 11:37:19.023118 1237617 cri.go:89] found id: "f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648"
	I0429 11:37:19.023143 1237617 cri.go:89] found id: ""
	I0429 11:37:19.023151 1237617 logs.go:276] 1 containers: [f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648]
	I0429 11:37:19.023218 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:19.026910 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 11:37:19.027010 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 11:37:19.065089 1237617 cri.go:89] found id: "836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4"
	I0429 11:37:19.065119 1237617 cri.go:89] found id: ""
	I0429 11:37:19.065127 1237617 logs.go:276] 1 containers: [836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4]
	I0429 11:37:19.065189 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:19.068812 1237617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 11:37:19.068889 1237617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 11:37:19.110244 1237617 cri.go:89] found id: "d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172"
	I0429 11:37:19.110267 1237617 cri.go:89] found id: ""
	I0429 11:37:19.110275 1237617 logs.go:276] 1 containers: [d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172]
	I0429 11:37:19.110340 1237617 ssh_runner.go:195] Run: which crictl
	I0429 11:37:19.114048 1237617 logs.go:123] Gathering logs for describe nodes ...
	I0429 11:37:19.114075 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 11:37:19.259285 1237617 logs.go:123] Gathering logs for kube-apiserver [a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123] ...
	I0429 11:37:19.259356 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123"
	I0429 11:37:19.315055 1237617 logs.go:123] Gathering logs for etcd [18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092] ...
	I0429 11:37:19.315098 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092"
	I0429 11:37:19.363687 1237617 logs.go:123] Gathering logs for kube-scheduler [0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998] ...
	I0429 11:37:19.363718 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998"
	I0429 11:37:19.415781 1237617 logs.go:123] Gathering logs for kube-controller-manager [836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4] ...
	I0429 11:37:19.415812 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4"
	I0429 11:37:19.482069 1237617 logs.go:123] Gathering logs for CRI-O ...
	I0429 11:37:19.482105 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 11:37:19.579150 1237617 logs.go:123] Gathering logs for kubelet ...
	I0429 11:37:19.579186 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 11:37:19.608110 1237617 logs.go:138] Found kubelet problem: Apr 29 11:35:38 addons-760922 kubelet[1488]: W0429 11:35:38.678625    1488 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	W0429 11:37:19.608336 1237617 logs.go:138] Found kubelet problem: Apr 29 11:35:38 addons-760922 kubelet[1488]: E0429 11:35:38.678665    1488 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	I0429 11:37:19.666761 1237617 logs.go:123] Gathering logs for dmesg ...
	I0429 11:37:19.666795 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 11:37:19.686823 1237617 logs.go:123] Gathering logs for container status ...
	I0429 11:37:19.686853 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 11:37:19.744734 1237617 logs.go:123] Gathering logs for kindnet [d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172] ...
	I0429 11:37:19.744764 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172"
	I0429 11:37:19.786630 1237617 logs.go:123] Gathering logs for coredns [f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853] ...
	I0429 11:37:19.786658 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853"
	I0429 11:37:19.828486 1237617 logs.go:123] Gathering logs for kube-proxy [f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648] ...
	I0429 11:37:19.828515 1237617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648"
	I0429 11:37:19.867452 1237617 out.go:304] Setting ErrFile to fd 2...
	I0429 11:37:19.867476 1237617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 11:37:19.867529 1237617 out.go:239] X Problems detected in kubelet:
	W0429 11:37:19.867542 1237617 out.go:239]   Apr 29 11:35:38 addons-760922 kubelet[1488]: W0429 11:35:38.678625    1488 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	W0429 11:37:19.867551 1237617 out.go:239]   Apr 29 11:35:38 addons-760922 kubelet[1488]: E0429 11:35:38.678665    1488 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-760922" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-760922' and this object
	I0429 11:37:19.867561 1237617 out.go:304] Setting ErrFile to fd 2...
	I0429 11:37:19.867567 1237617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:37:29.878356 1237617 system_pods.go:59] 18 kube-system pods found
	I0429 11:37:29.878393 1237617 system_pods.go:61] "coredns-7db6d8ff4d-hsk8z" [a0643984-c7ce-414e-84c3-d69620f28409] Running
	I0429 11:37:29.878400 1237617 system_pods.go:61] "csi-hostpath-attacher-0" [7d2591eb-0cf8-452f-949c-f7df587938a4] Running
	I0429 11:37:29.878404 1237617 system_pods.go:61] "csi-hostpath-resizer-0" [9fbcbd11-6229-4f03-80c9-8211f06eb595] Running
	I0429 11:37:29.878409 1237617 system_pods.go:61] "csi-hostpathplugin-zvs7l" [c7e414f0-e537-4283-acbf-6f4e20086035] Running
	I0429 11:37:29.878413 1237617 system_pods.go:61] "etcd-addons-760922" [cfdfca13-5819-4d23-9247-b019d73ef52a] Running
	I0429 11:37:29.878418 1237617 system_pods.go:61] "kindnet-7gjxl" [2f72207f-2fad-412c-bab0-ce62cfb60658] Running
	I0429 11:37:29.878424 1237617 system_pods.go:61] "kube-apiserver-addons-760922" [c671106e-5d08-4cc2-a7fe-85880f52c9bb] Running
	I0429 11:37:29.878429 1237617 system_pods.go:61] "kube-controller-manager-addons-760922" [cf44cb04-07b0-4d0d-90df-e1e8f4500390] Running
	I0429 11:37:29.878439 1237617 system_pods.go:61] "kube-ingress-dns-minikube" [f32d87a8-ebd8-4285-adce-095ad8ceb09b] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0429 11:37:29.878451 1237617 system_pods.go:61] "kube-proxy-w598j" [5f30f9a6-4dff-4f0a-a330-b3776c8936d1] Running
	I0429 11:37:29.878457 1237617 system_pods.go:61] "kube-scheduler-addons-760922" [a64dc54e-fb5a-4f1f-823d-578fdd3e24a9] Running
	I0429 11:37:29.878464 1237617 system_pods.go:61] "metrics-server-c59844bb4-t8bst" [55fed84e-6197-4372-857c-598bbe503660] Running
	I0429 11:37:29.878468 1237617 system_pods.go:61] "nvidia-device-plugin-daemonset-7lk7c" [68690e1d-7f8a-4423-aaed-674894ca372a] Running
	I0429 11:37:29.878476 1237617 system_pods.go:61] "registry-proxy-m8xkv" [33630f46-f313-4cc6-9d44-213e6df0c519] Running
	I0429 11:37:29.878480 1237617 system_pods.go:61] "registry-tj9l7" [9bb8489a-b110-4d66-afb6-a31def145ada] Running
	I0429 11:37:29.878492 1237617 system_pods.go:61] "snapshot-controller-745499f584-dmhbn" [fe7ceb3f-8331-458c-a043-1cf4a4522c0b] Running
	I0429 11:37:29.878496 1237617 system_pods.go:61] "snapshot-controller-745499f584-rkskn" [e83761c3-e638-4da4-978d-42799f1a45fb] Running
	I0429 11:37:29.878500 1237617 system_pods.go:61] "storage-provisioner" [90f23e9d-3062-4587-bf6d-23d10fd60f3c] Running
	I0429 11:37:29.878506 1237617 system_pods.go:74] duration metric: took 11.078688787s to wait for pod list to return data ...
	I0429 11:37:29.878517 1237617 default_sa.go:34] waiting for default service account to be created ...
	I0429 11:37:29.881038 1237617 default_sa.go:45] found service account: "default"
	I0429 11:37:29.881065 1237617 default_sa.go:55] duration metric: took 2.541352ms for default service account to be created ...
	I0429 11:37:29.881075 1237617 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 11:37:29.891034 1237617 system_pods.go:86] 18 kube-system pods found
	I0429 11:37:29.891066 1237617 system_pods.go:89] "coredns-7db6d8ff4d-hsk8z" [a0643984-c7ce-414e-84c3-d69620f28409] Running
	I0429 11:37:29.891074 1237617 system_pods.go:89] "csi-hostpath-attacher-0" [7d2591eb-0cf8-452f-949c-f7df587938a4] Running
	I0429 11:37:29.891079 1237617 system_pods.go:89] "csi-hostpath-resizer-0" [9fbcbd11-6229-4f03-80c9-8211f06eb595] Running
	I0429 11:37:29.891100 1237617 system_pods.go:89] "csi-hostpathplugin-zvs7l" [c7e414f0-e537-4283-acbf-6f4e20086035] Running
	I0429 11:37:29.891111 1237617 system_pods.go:89] "etcd-addons-760922" [cfdfca13-5819-4d23-9247-b019d73ef52a] Running
	I0429 11:37:29.891116 1237617 system_pods.go:89] "kindnet-7gjxl" [2f72207f-2fad-412c-bab0-ce62cfb60658] Running
	I0429 11:37:29.891120 1237617 system_pods.go:89] "kube-apiserver-addons-760922" [c671106e-5d08-4cc2-a7fe-85880f52c9bb] Running
	I0429 11:37:29.891125 1237617 system_pods.go:89] "kube-controller-manager-addons-760922" [cf44cb04-07b0-4d0d-90df-e1e8f4500390] Running
	I0429 11:37:29.891134 1237617 system_pods.go:89] "kube-ingress-dns-minikube" [f32d87a8-ebd8-4285-adce-095ad8ceb09b] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0429 11:37:29.891142 1237617 system_pods.go:89] "kube-proxy-w598j" [5f30f9a6-4dff-4f0a-a330-b3776c8936d1] Running
	I0429 11:37:29.891147 1237617 system_pods.go:89] "kube-scheduler-addons-760922" [a64dc54e-fb5a-4f1f-823d-578fdd3e24a9] Running
	I0429 11:37:29.891152 1237617 system_pods.go:89] "metrics-server-c59844bb4-t8bst" [55fed84e-6197-4372-857c-598bbe503660] Running
	I0429 11:37:29.891158 1237617 system_pods.go:89] "nvidia-device-plugin-daemonset-7lk7c" [68690e1d-7f8a-4423-aaed-674894ca372a] Running
	I0429 11:37:29.891165 1237617 system_pods.go:89] "registry-proxy-m8xkv" [33630f46-f313-4cc6-9d44-213e6df0c519] Running
	I0429 11:37:29.891179 1237617 system_pods.go:89] "registry-tj9l7" [9bb8489a-b110-4d66-afb6-a31def145ada] Running
	I0429 11:37:29.891182 1237617 system_pods.go:89] "snapshot-controller-745499f584-dmhbn" [fe7ceb3f-8331-458c-a043-1cf4a4522c0b] Running
	I0429 11:37:29.891187 1237617 system_pods.go:89] "snapshot-controller-745499f584-rkskn" [e83761c3-e638-4da4-978d-42799f1a45fb] Running
	I0429 11:37:29.891192 1237617 system_pods.go:89] "storage-provisioner" [90f23e9d-3062-4587-bf6d-23d10fd60f3c] Running
	I0429 11:37:29.891202 1237617 system_pods.go:126] duration metric: took 10.121403ms to wait for k8s-apps to be running ...
	I0429 11:37:29.891213 1237617 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 11:37:29.891275 1237617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 11:37:29.906267 1237617 system_svc.go:56] duration metric: took 15.043722ms WaitForService to wait for kubelet
	I0429 11:37:29.906302 1237617 kubeadm.go:576] duration metric: took 2m24.265124891s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 11:37:29.906323 1237617 node_conditions.go:102] verifying NodePressure condition ...
	I0429 11:37:29.909599 1237617 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0429 11:37:29.909631 1237617 node_conditions.go:123] node cpu capacity is 2
	I0429 11:37:29.909647 1237617 node_conditions.go:105] duration metric: took 3.318742ms to run NodePressure ...
	I0429 11:37:29.909660 1237617 start.go:240] waiting for startup goroutines ...
	I0429 11:37:29.909674 1237617 start.go:245] waiting for cluster config update ...
	I0429 11:37:29.909694 1237617 start.go:254] writing updated cluster config ...
	I0429 11:37:29.910040 1237617 ssh_runner.go:195] Run: rm -f paused
	I0429 11:37:30.273170 1237617 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 11:37:30.275761 1237617 out.go:177] * Done! kubectl is now configured to use "addons-760922" cluster and "default" namespace by default
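The run above repeatedly scans the kubelet journal and flags the two `reflector.go` RBAC denials as "kubelet problems". A minimal sketch (not minikube's actual `logs.go` implementation) of that scan is below; the `sample` variable is abridged from the journal excerpt in this log, and on a live node you would pipe in `minikube ssh -- sudo journalctl -u kubelet -n 400` instead.

```shell
# Abridged kubelet journal lines taken from the log above (hypothetical sample).
sample='Apr 29 11:35:38 addons-760922 kubelet[1488]: W0429 11:35:38.678625 1488 reflector.go:547] secrets "ingress-nginx-admission" is forbidden
Apr 29 11:35:38 addons-760922 kubelet[1488]: E0429 11:35:38.678665 1488 reflector.go:150] Failed to watch *v1.Secret: secrets "ingress-nginx-admission" is forbidden
Apr 29 11:35:39 addons-760922 kubelet[1488]: I0429 11:35:39.000000 1488 kubelet.go:100] node is healthy'

# Count RBAC denials, roughly the way the "Found kubelet problem" lines are produced.
printf '%s\n' "$sample" | grep -c 'is forbidden'
```

Against this sample the count is 2, matching the two `Found kubelet problem` entries reported above; both denials are transient node-authorizer errors seen before the `ingress-nginx-admission` secret's pod relationship is established.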
	
	
	==> CRI-O <==
	Apr 29 11:41:51 addons-760922 crio[911]: time="2024-04-29 11:41:51.784688045Z" level=info msg="Stopped pod sandbox (already stopped): 23f51ea97559d63b5d6f760b116b24ca09caeb5b02403a3b9667478e1243ab25" id=c24c351b-6d9a-45b1-b6cd-c34e9160032b name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 29 11:41:51 addons-760922 crio[911]: time="2024-04-29 11:41:51.784985454Z" level=info msg="Removing pod sandbox: 23f51ea97559d63b5d6f760b116b24ca09caeb5b02403a3b9667478e1243ab25" id=6927930d-18e2-47b4-ac97-6854fa03839c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 29 11:41:51 addons-760922 crio[911]: time="2024-04-29 11:41:51.793580052Z" level=info msg="Removed pod sandbox: 23f51ea97559d63b5d6f760b116b24ca09caeb5b02403a3b9667478e1243ab25" id=6927930d-18e2-47b4-ac97-6854fa03839c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 29 11:41:51 addons-760922 crio[911]: time="2024-04-29 11:41:51.794071684Z" level=info msg="Stopping pod sandbox: 977f327676cc317b4b14ab18d6c5053493d8cfa8d5b68f0fc001c19f20943de9" id=ca06e81e-81fb-4951-86cc-ec75edb1ce8f name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 29 11:41:51 addons-760922 crio[911]: time="2024-04-29 11:41:51.794108394Z" level=info msg="Stopped pod sandbox (already stopped): 977f327676cc317b4b14ab18d6c5053493d8cfa8d5b68f0fc001c19f20943de9" id=ca06e81e-81fb-4951-86cc-ec75edb1ce8f name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 29 11:41:51 addons-760922 crio[911]: time="2024-04-29 11:41:51.794474718Z" level=info msg="Removing pod sandbox: 977f327676cc317b4b14ab18d6c5053493d8cfa8d5b68f0fc001c19f20943de9" id=59101364-3352-491c-9f83-99f67a2e736f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 29 11:41:51 addons-760922 crio[911]: time="2024-04-29 11:41:51.803698178Z" level=info msg="Removed pod sandbox: 977f327676cc317b4b14ab18d6c5053493d8cfa8d5b68f0fc001c19f20943de9" id=59101364-3352-491c-9f83-99f67a2e736f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 29 11:42:34 addons-760922 crio[911]: time="2024-04-29 11:42:34.299524267Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=8cfa78a9-a21f-4b7b-88ad-916a62c3185c name=/runtime.v1.ImageService/ImageStatus
	Apr 29 11:42:34 addons-760922 crio[911]: time="2024-04-29 11:42:34.299750473Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=8cfa78a9-a21f-4b7b-88ad-916a62c3185c name=/runtime.v1.ImageService/ImageStatus
	Apr 29 11:42:34 addons-760922 crio[911]: time="2024-04-29 11:42:34.300405591Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=1a8c0f19-e3bf-440f-b7d6-62a2f03f3560 name=/runtime.v1.ImageService/ImageStatus
	Apr 29 11:42:34 addons-760922 crio[911]: time="2024-04-29 11:42:34.300598042Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=1a8c0f19-e3bf-440f-b7d6-62a2f03f3560 name=/runtime.v1.ImageService/ImageStatus
	Apr 29 11:42:34 addons-760922 crio[911]: time="2024-04-29 11:42:34.301399637Z" level=info msg="Creating container: default/hello-world-app-86c47465fc-cppn7/hello-world-app" id=be060f40-c5c2-49a2-bf73-419dda271f98 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 29 11:42:34 addons-760922 crio[911]: time="2024-04-29 11:42:34.301512932Z" level=warning msg="Allowed annotations are specified for workload []"
	Apr 29 11:42:34 addons-760922 crio[911]: time="2024-04-29 11:42:34.368851087Z" level=info msg="Created container 7e48b3e793dbc25609676f1fb73b5fef2f5415848c08afab2f345f0260d42894: default/hello-world-app-86c47465fc-cppn7/hello-world-app" id=be060f40-c5c2-49a2-bf73-419dda271f98 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 29 11:42:34 addons-760922 crio[911]: time="2024-04-29 11:42:34.369744702Z" level=info msg="Starting container: 7e48b3e793dbc25609676f1fb73b5fef2f5415848c08afab2f345f0260d42894" id=8295c3c3-8a74-48fa-8739-bc96f5ff5ea0 name=/runtime.v1.RuntimeService/StartContainer
	Apr 29 11:42:34 addons-760922 crio[911]: time="2024-04-29 11:42:34.377284604Z" level=info msg="Started container" PID=8684 containerID=7e48b3e793dbc25609676f1fb73b5fef2f5415848c08afab2f345f0260d42894 description=default/hello-world-app-86c47465fc-cppn7/hello-world-app id=8295c3c3-8a74-48fa-8739-bc96f5ff5ea0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e6b9e6d3b0be04b03da4fe4cb9494943b762e9214f9be7413dbf20d468233ee
	Apr 29 11:42:34 addons-760922 conmon[8673]: conmon 7e48b3e793dbc2560967 <ninfo>: container 8684 exited with status 1
	Apr 29 11:42:34 addons-760922 crio[911]: time="2024-04-29 11:42:34.590328384Z" level=info msg="Removing container: 0259425c4e8e20fc55029215de2177a0a2db1a48d5369e453d3ed3f89f7670d3" id=d7f68ba7-580c-42d0-909b-1f6344218193 name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 29 11:42:34 addons-760922 crio[911]: time="2024-04-29 11:42:34.609529705Z" level=info msg="Removed container 0259425c4e8e20fc55029215de2177a0a2db1a48d5369e453d3ed3f89f7670d3: default/hello-world-app-86c47465fc-cppn7/hello-world-app" id=d7f68ba7-580c-42d0-909b-1f6344218193 name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 29 11:43:44 addons-760922 crio[911]: time="2024-04-29 11:43:44.962733191Z" level=info msg="Stopping container: 052392dd63f2aa05948d04f335570315575ff7676cd7411497c545d8174bfcc1 (timeout: 30s)" id=fd126a6b-9d64-44fc-a1c9-45779d6df8c6 name=/runtime.v1.RuntimeService/StopContainer
	Apr 29 11:43:46 addons-760922 crio[911]: time="2024-04-29 11:43:46.123437639Z" level=info msg="Stopped container 052392dd63f2aa05948d04f335570315575ff7676cd7411497c545d8174bfcc1: kube-system/metrics-server-c59844bb4-t8bst/metrics-server" id=fd126a6b-9d64-44fc-a1c9-45779d6df8c6 name=/runtime.v1.RuntimeService/StopContainer
	Apr 29 11:43:46 addons-760922 crio[911]: time="2024-04-29 11:43:46.124292879Z" level=info msg="Stopping pod sandbox: 2b42b7d5388f5287759b3f4c28f7db4f9dfbf570f9bd91f922dc5630e536b106" id=a4f26223-14b0-4315-a4b6-802771c32706 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 29 11:43:46 addons-760922 crio[911]: time="2024-04-29 11:43:46.124601587Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-t8bst Namespace:kube-system ID:2b42b7d5388f5287759b3f4c28f7db4f9dfbf570f9bd91f922dc5630e536b106 UID:55fed84e-6197-4372-857c-598bbe503660 NetNS:/var/run/netns/f824c0f1-578c-4339-9f62-0c74269948ad Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Apr 29 11:43:46 addons-760922 crio[911]: time="2024-04-29 11:43:46.124844950Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-t8bst from CNI network \"kindnet\" (type=ptp)"
	Apr 29 11:43:46 addons-760922 crio[911]: time="2024-04-29 11:43:46.161574093Z" level=info msg="Stopped pod sandbox: 2b42b7d5388f5287759b3f4c28f7db4f9dfbf570f9bd91f922dc5630e536b106" id=a4f26223-14b0-4315-a4b6-802771c32706 name=/runtime.v1.RuntimeService/StopPodSandbox
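The CRI-O journal above shows `hello-world-app` being created, started, and immediately exiting with status 1 (the conmon line at 11:42:34), i.e. a crash loop. A hedged sketch for extracting such non-zero exits from a journal excerpt; the `sample` is abridged from the lines above, and on a live node you would substitute `minikube ssh -- sudo journalctl -u crio -n 400`.

```shell
# Abridged CRI-O/conmon journal lines taken from the log above (hypothetical sample).
sample='Apr 29 11:42:34 addons-760922 conmon[8673]: conmon 7e48b3e793dbc2560967 <ninfo>: container 8684 exited with status 1
Apr 29 11:42:34 addons-760922 crio[911]: time="2024-04-29 11:42:34.369744702Z" level=info msg="Starting container"'

# Surface containers that terminated with a non-zero exit status.
printf '%s\n' "$sample" | grep 'exited with status [1-9]'
```

Here only the conmon line matches, pointing at the failing `hello-world-app` container rather than any control-plane component.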
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7e48b3e793dbc       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                        About a minute ago   Exited              hello-world-app           4                   3e6b9e6d3b0be       hello-world-app-86c47465fc-cppn7
	5dcda6f95d8c2       docker.io/library/nginx@sha256:1f37baf7373d386ee9de0437325ae3e0202a3959803fd79144fa0bb27e2b2801                         5 minutes ago        Running             nginx                     0                   3d47ec3f5ea3c       nginx
	ff63c37cdf5c6       ghcr.io/headlamp-k8s/headlamp@sha256:1f277f42730106526a27560517a4c5f9253ccb2477be458986f44a791158a02c                   5 minutes ago        Running             headlamp                  0                   3bd7926cee370       headlamp-7559bf459f-8k4nc
	27a8b2ead9827       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            7 minutes ago        Running             gcp-auth                  0                   b3fcccd59b9ee       gcp-auth-5db96cd9b4-ls88f
	972973edd7a4f       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         7 minutes ago        Running             yakd                      0                   cdcaa68701c7d       yakd-dashboard-5ddbf7d777-jjvnb
	052392dd63f2a       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   7 minutes ago        Exited              metrics-server            0                   2b42b7d5388f5       metrics-server-c59844bb4-t8bst
	2889233ebbfde       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98        8 minutes ago        Running             local-path-provisioner    0                   00dc4dc432ca7       local-path-provisioner-8d985888d-jvmjz
	ea33593fdd912       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        8 minutes ago        Running             storage-provisioner       0                   0095105c319d5       storage-provisioner
	f5aa390616f68       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        8 minutes ago        Running             coredns                   0                   f10b1db56a3bb       coredns-7db6d8ff4d-hsk8z
	f6d37371c711b       cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f                                                        8 minutes ago        Running             kube-proxy                0                   f3188134a23b4       kube-proxy-w598j
	d1c2bca574223       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d                                                        8 minutes ago        Running             kindnet-cni               0                   36a1c944f17c0       kindnet-7gjxl
	18d8a3169373e       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                        9 minutes ago        Running             etcd                      0                   8ddcb5be7043b       etcd-addons-760922
	0b4bf008d8310       547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a                                                        9 minutes ago        Running             kube-scheduler            0                   8eab0f38d946a       kube-scheduler-addons-760922
	836169ee36c10       68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1                                                        9 minutes ago        Running             kube-controller-manager   0                   bd9885044a0ff       kube-controller-manager-addons-760922
	a7a7309bbe879       181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb                                                        9 minutes ago        Running             kube-apiserver            0                   269a5fc33ea83       kube-apiserver-addons-760922
	
	
	==> coredns [f5aa390616f688ef8f9e965990765e53de3623070f536ce57d6c39292d45b853] <==
	[INFO] 10.244.0.19:41444 - 21642 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000059799s
	[INFO] 10.244.0.19:41444 - 47992 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004544942s
	[INFO] 10.244.0.19:42204 - 60956 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00595076s
	[INFO] 10.244.0.19:41444 - 31957 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001960933s
	[INFO] 10.244.0.19:41444 - 39504 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000117226s
	[INFO] 10.244.0.19:42204 - 40731 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001193528s
	[INFO] 10.244.0.19:42204 - 26944 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000068275s
	[INFO] 10.244.0.19:58214 - 34023 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000181127s
	[INFO] 10.244.0.19:47300 - 63375 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000168368s
	[INFO] 10.244.0.19:47300 - 55534 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000080795s
	[INFO] 10.244.0.19:58214 - 16367 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000043413s
	[INFO] 10.244.0.19:47300 - 29214 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000074839s
	[INFO] 10.244.0.19:47300 - 17173 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000060258s
	[INFO] 10.244.0.19:58214 - 49839 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000049682s
	[INFO] 10.244.0.19:47300 - 5093 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005682s
	[INFO] 10.244.0.19:58214 - 53864 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055376s
	[INFO] 10.244.0.19:47300 - 2823 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000094883s
	[INFO] 10.244.0.19:58214 - 51116 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058757s
	[INFO] 10.244.0.19:58214 - 41016 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000053809s
	[INFO] 10.244.0.19:47300 - 28938 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001655909s
	[INFO] 10.244.0.19:58214 - 17059 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001247837s
	[INFO] 10.244.0.19:47300 - 55826 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001225109s
	[INFO] 10.244.0.19:47300 - 28929 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000073723s
	[INFO] 10.244.0.19:58214 - 22885 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001166976s
	[INFO] 10.244.0.19:58214 - 36751 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000047664s
	
	
	==> describe nodes <==
	Name:               addons-760922
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-760922
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d
	                    minikube.k8s.io/name=addons-760922
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T11_34_52_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-760922
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 11:34:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-760922
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 11:43:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 11:41:30 +0000   Mon, 29 Apr 2024 11:34:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 11:41:30 +0000   Mon, 29 Apr 2024 11:34:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 11:41:30 +0000   Mon, 29 Apr 2024 11:34:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 11:41:30 +0000   Mon, 29 Apr 2024 11:35:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-760922
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 6400e2c8e40e41409106f64aaa4c5941
	  System UUID:                3b9fc9c3-4dee-4215-86b3-ff01ad01914f
	  Boot ID:                    b8f2360a-0b19-4e04-aa8c-604719eae8f1
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-cppn7          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  gcp-auth                    gcp-auth-5db96cd9b4-ls88f                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  headlamp                    headlamp-7559bf459f-8k4nc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 coredns-7db6d8ff4d-hsk8z                  100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m41s
	  kube-system                 etcd-addons-760922                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m55s
	  kube-system                 kindnet-7gjxl                             100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m41s
	  kube-system                 kube-apiserver-addons-760922              250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 kube-controller-manager-addons-760922     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 kube-proxy-w598j                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m41s
	  kube-system                 kube-scheduler-addons-760922              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m36s
	  local-path-storage          local-path-provisioner-8d985888d-jvmjz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m36s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-jjvnb           0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     8m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m36s                kube-proxy       
	  Normal  Starting                 9m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m2s (x8 over 9m2s)  kubelet          Node addons-760922 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m2s (x8 over 9m2s)  kubelet          Node addons-760922 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m2s (x8 over 9m2s)  kubelet          Node addons-760922 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m55s                kubelet          Node addons-760922 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m55s                kubelet          Node addons-760922 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m55s                kubelet          Node addons-760922 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m41s                node-controller  Node addons-760922 event: Registered Node addons-760922 in Controller
	  Normal  NodeReady                8m8s                 kubelet          Node addons-760922 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001075] FS-Cache: O-key=[8] 'c63e5c0100000000'
	[  +0.000727] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000936] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=000000002cddfdcd
	[  +0.001167] FS-Cache: N-key=[8] 'c63e5c0100000000'
	[  +0.002652] FS-Cache: Duplicate cookie detected
	[  +0.000752] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.000953] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=00000000b5edf900
	[  +0.001050] FS-Cache: O-key=[8] 'c63e5c0100000000'
	[  +0.000708] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000980] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=000000006f4e83dc
	[  +0.001064] FS-Cache: N-key=[8] 'c63e5c0100000000'
	[  +3.263862] FS-Cache: Duplicate cookie detected
	[  +0.000761] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000990] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=000000005910e427
	[  +0.001043] FS-Cache: O-key=[8] 'c53e5c0100000000'
	[  +0.000806] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000c61f21fc
	[  +0.001058] FS-Cache: N-key=[8] 'c53e5c0100000000'
	[  +0.258281] FS-Cache: Duplicate cookie detected
	[  +0.000808] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.000970] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=00000000820f26a0
	[  +0.001043] FS-Cache: O-key=[8] 'cb3e5c0100000000'
	[  +0.000760] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000937] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000bb4b91ea
	[  +0.001031] FS-Cache: N-key=[8] 'cb3e5c0100000000'
	
	
	==> etcd [18d8a3169373e74dc7d6856fbd9da3eac329d95311f62caae457e2f407b38092] <==
	{"level":"info","ts":"2024-04-29T11:34:44.834905Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T11:34:45.796721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-29T11:34:45.796836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-29T11:34:45.796882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-04-29T11:34:45.796936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-04-29T11:34:45.796967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-04-29T11:34:45.797003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-04-29T11:34:45.797036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-04-29T11:34:45.799049Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-760922 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T11:34:45.799128Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T11:34:45.799439Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T11:34:45.799556Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T11:34:45.806367Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T11:34:45.808427Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T11:34:45.808573Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T11:34:45.809481Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T11:34:45.830421Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-04-29T11:34:45.836753Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T11:34:45.83684Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T11:35:07.308711Z","caller":"traceutil/trace.go:171","msg":"trace[1688005631] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"386.89247ms","start":"2024-04-29T11:35:06.921796Z","end":"2024-04-29T11:35:07.308689Z","steps":["trace[1688005631] 'process raft request'  (duration: 341.270412ms)","trace[1688005631] 'compare'  (duration: 45.39106ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T11:35:07.310163Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T11:35:06.921776Z","time spent":"388.004488ms","remote":"127.0.0.1:59276","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":707,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kindnet-7gjxl.17cabd1530c7590e\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kindnet-7gjxl.17cabd1530c7590e\" value_size:630 lease:8128028836484299275 >> failure:<>"}
	{"level":"info","ts":"2024-04-29T11:35:07.31065Z","caller":"traceutil/trace.go:171","msg":"trace[860246016] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"138.081324ms","start":"2024-04-29T11:35:07.172555Z","end":"2024-04-29T11:35:07.310636Z","steps":["trace[860246016] 'process raft request'  (duration: 136.048851ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T11:35:09.244469Z","caller":"traceutil/trace.go:171","msg":"trace[1895410619] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"135.287222ms","start":"2024-04-29T11:35:09.109167Z","end":"2024-04-29T11:35:09.244454Z","steps":["trace[1895410619] 'process raft request'  (duration: 48.88677ms)","trace[1895410619] 'compare'  (duration: 86.224642ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T11:35:09.945871Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.808979ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-760922\" ","response":"range_response_count:1 size:5744"}
	{"level":"info","ts":"2024-04-29T11:35:09.946005Z","caller":"traceutil/trace.go:171","msg":"trace[346994486] range","detail":"{range_begin:/registry/minions/addons-760922; range_end:; response_count:1; response_revision:461; }","duration":"107.950918ms","start":"2024-04-29T11:35:09.838041Z","end":"2024-04-29T11:35:09.945992Z","steps":["trace[346994486] 'agreement among raft nodes before linearized reading'  (duration: 91.782387ms)","trace[346994486] 'get authentication metadata'  (duration: 15.96452ms)"],"step_count":2}
	
	
	==> gcp-auth [27a8b2ead9827a1abec94db6cbca613a00f45bbccfdbe9469ca8ac31e5fe2e4f] <==
	2024/04/29 11:36:45 GCP Auth Webhook started!
	2024/04/29 11:37:39 Ready to marshal response ...
	2024/04/29 11:37:39 Ready to write response ...
	2024/04/29 11:37:41 Ready to marshal response ...
	2024/04/29 11:37:41 Ready to write response ...
	2024/04/29 11:38:00 Ready to marshal response ...
	2024/04/29 11:38:00 Ready to write response ...
	2024/04/29 11:38:00 Ready to marshal response ...
	2024/04/29 11:38:00 Ready to write response ...
	2024/04/29 11:38:05 Ready to marshal response ...
	2024/04/29 11:38:05 Ready to write response ...
	2024/04/29 11:38:08 Ready to marshal response ...
	2024/04/29 11:38:08 Ready to write response ...
	2024/04/29 11:38:16 Ready to marshal response ...
	2024/04/29 11:38:16 Ready to write response ...
	2024/04/29 11:38:16 Ready to marshal response ...
	2024/04/29 11:38:16 Ready to write response ...
	2024/04/29 11:38:16 Ready to marshal response ...
	2024/04/29 11:38:16 Ready to write response ...
	2024/04/29 11:38:40 Ready to marshal response ...
	2024/04/29 11:38:40 Ready to write response ...
	2024/04/29 11:41:00 Ready to marshal response ...
	2024/04/29 11:41:00 Ready to write response ...
	
	
	==> kernel <==
	 11:43:46 up  7:26,  0 users,  load average: 0.08, 0.79, 2.15
	Linux addons-760922 5.15.0-1058-aws #64~20.04.1-Ubuntu SMP Tue Apr 9 11:11:55 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [d1c2bca5742231f59a4e7d5ebfca2b0ff8a43e410c36f420cb302b1714fcb172] <==
	I0429 11:41:38.559845       1 main.go:227] handling current node
	I0429 11:41:48.564829       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:41:48.564858       1 main.go:227] handling current node
	I0429 11:41:58.568658       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:41:58.568712       1 main.go:227] handling current node
	I0429 11:42:08.572146       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:42:08.572177       1 main.go:227] handling current node
	I0429 11:42:18.575706       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:42:18.575735       1 main.go:227] handling current node
	I0429 11:42:28.580797       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:42:28.580903       1 main.go:227] handling current node
	I0429 11:42:38.585361       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:42:38.585388       1 main.go:227] handling current node
	I0429 11:42:48.595669       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:42:48.595695       1 main.go:227] handling current node
	I0429 11:42:58.599276       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:42:58.599302       1 main.go:227] handling current node
	I0429 11:43:08.611303       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:43:08.611331       1 main.go:227] handling current node
	I0429 11:43:18.615598       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:43:18.615625       1 main.go:227] handling current node
	I0429 11:43:28.627344       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:43:28.627368       1 main.go:227] handling current node
	I0429 11:43:38.631453       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 11:43:38.631482       1 main.go:227] handling current node
	
	
	==> kube-apiserver [a7a7309bbe8794b408f94c36d53409e0b22d3fc5e1565bf369b327e3b1812123] <==
	E0429 11:36:56.351351       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.55.5:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.55.5:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.55.5:443: connect: connection refused
	W0429 11:36:56.351803       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 11:36:56.351873       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0429 11:36:56.404005       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0429 11:36:56.410153       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0429 11:37:51.873787       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0429 11:37:53.872597       1 watch.go:250] http2: stream closed
	I0429 11:38:16.760954       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.54.230"}
	I0429 11:38:21.658424       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 11:38:21.658473       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 11:38:21.692154       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 11:38:21.692282       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 11:38:21.729677       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 11:38:21.729892       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 11:38:21.804127       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 11:38:21.804169       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0429 11:38:22.754595       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0429 11:38:22.804654       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0429 11:38:22.831004       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0429 11:38:34.353320       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0429 11:38:35.380027       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0429 11:38:39.871105       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0429 11:38:40.212402       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.250.226"}
	I0429 11:41:01.069296       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.113.32"}
	
	
	==> kube-controller-manager [836169ee36c10d25a5b392d25a39dc752379d87a719cdb052c530d840b879fb4] <==
	E0429 11:41:59.789063       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0429 11:42:00.314440       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="49.945µs"
	W0429 11:42:07.111693       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 11:42:07.111733       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 11:42:10.546296       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 11:42:10.546334       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0429 11:42:35.606932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="42.059µs"
	W0429 11:42:37.786880       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 11:42:37.786918       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 11:42:38.365556       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 11:42:38.365600       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 11:42:39.039947       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 11:42:39.039989       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 11:42:44.551501       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 11:42:44.551540       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0429 11:42:47.311554       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="62.843µs"
	W0429 11:43:14.305178       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 11:43:14.305216       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 11:43:14.898965       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 11:43:14.899004       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 11:43:26.762680       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 11:43:26.762718       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 11:43:29.909782       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 11:43:29.909818       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0429 11:43:44.944200       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="18.847µs"
	
	
	==> kube-proxy [f6d37371c711ba1179bb1a3cdba8018b5ce1f590a9bd4536719eb0f8757eb648] <==
	I0429 11:35:09.640055       1 server_linux.go:69] "Using iptables proxy"
	I0429 11:35:10.229193       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0429 11:35:10.401417       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0429 11:35:10.401471       1 server_linux.go:165] "Using iptables Proxier"
	I0429 11:35:10.403953       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0429 11:35:10.403985       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0429 11:35:10.404008       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 11:35:10.404197       1 server.go:872] "Version info" version="v1.30.0"
	I0429 11:35:10.404218       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 11:35:10.408579       1 config.go:192] "Starting service config controller"
	I0429 11:35:10.408610       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 11:35:10.408648       1 config.go:101] "Starting endpoint slice config controller"
	I0429 11:35:10.408652       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 11:35:10.409182       1 config.go:319] "Starting node config controller"
	I0429 11:35:10.409998       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 11:35:10.516783       1 shared_informer.go:320] Caches are synced for node config
	I0429 11:35:10.517566       1 shared_informer.go:320] Caches are synced for service config
	I0429 11:35:10.517592       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0b4bf008d831035b0b1bdbf378c65024e650de9cc13691a43604af72b5561998] <==
	W0429 11:34:49.174032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 11:34:49.174070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 11:34:49.174142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 11:34:49.174183       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 11:34:49.174317       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 11:34:49.174361       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 11:34:49.174468       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 11:34:49.174508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 11:34:49.174577       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 11:34:49.174614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 11:34:49.174687       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 11:34:49.174733       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 11:34:49.175159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 11:34:49.175225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 11:34:49.175300       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 11:34:49.175341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 11:34:49.175445       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 11:34:49.177258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 11:34:49.176803       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 11:34:49.177391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 11:34:49.176844       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 11:34:49.177492       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 11:34:49.990611       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 11:34:49.990654       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0429 11:34:50.667853       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 11:42:23 addons-760922 kubelet[1488]: E0429 11:42:23.299284    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 40s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-cppn7_default(a52dcf22-6930-49b0-a486-c54c89f75497)\"" pod="default/hello-world-app-86c47465fc-cppn7" podUID="a52dcf22-6930-49b0-a486-c54c89f75497"
	Apr 29 11:42:34 addons-760922 kubelet[1488]: I0429 11:42:34.298961    1488 scope.go:117] "RemoveContainer" containerID="0259425c4e8e20fc55029215de2177a0a2db1a48d5369e453d3ed3f89f7670d3"
	Apr 29 11:42:34 addons-760922 kubelet[1488]: I0429 11:42:34.589317    1488 scope.go:117] "RemoveContainer" containerID="0259425c4e8e20fc55029215de2177a0a2db1a48d5369e453d3ed3f89f7670d3"
	Apr 29 11:42:35 addons-760922 kubelet[1488]: I0429 11:42:35.592476    1488 scope.go:117] "RemoveContainer" containerID="7e48b3e793dbc25609676f1fb73b5fef2f5415848c08afab2f345f0260d42894"
	Apr 29 11:42:35 addons-760922 kubelet[1488]: E0429 11:42:35.592769    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-cppn7_default(a52dcf22-6930-49b0-a486-c54c89f75497)\"" pod="default/hello-world-app-86c47465fc-cppn7" podUID="a52dcf22-6930-49b0-a486-c54c89f75497"
	Apr 29 11:42:47 addons-760922 kubelet[1488]: I0429 11:42:47.298933    1488 scope.go:117] "RemoveContainer" containerID="7e48b3e793dbc25609676f1fb73b5fef2f5415848c08afab2f345f0260d42894"
	Apr 29 11:42:47 addons-760922 kubelet[1488]: E0429 11:42:47.299231    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-cppn7_default(a52dcf22-6930-49b0-a486-c54c89f75497)\"" pod="default/hello-world-app-86c47465fc-cppn7" podUID="a52dcf22-6930-49b0-a486-c54c89f75497"
	Apr 29 11:43:01 addons-760922 kubelet[1488]: I0429 11:43:01.298735    1488 scope.go:117] "RemoveContainer" containerID="7e48b3e793dbc25609676f1fb73b5fef2f5415848c08afab2f345f0260d42894"
	Apr 29 11:43:01 addons-760922 kubelet[1488]: E0429 11:43:01.299026    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-cppn7_default(a52dcf22-6930-49b0-a486-c54c89f75497)\"" pod="default/hello-world-app-86c47465fc-cppn7" podUID="a52dcf22-6930-49b0-a486-c54c89f75497"
	Apr 29 11:43:12 addons-760922 kubelet[1488]: I0429 11:43:12.298877    1488 scope.go:117] "RemoveContainer" containerID="7e48b3e793dbc25609676f1fb73b5fef2f5415848c08afab2f345f0260d42894"
	Apr 29 11:43:12 addons-760922 kubelet[1488]: E0429 11:43:12.299172    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-cppn7_default(a52dcf22-6930-49b0-a486-c54c89f75497)\"" pod="default/hello-world-app-86c47465fc-cppn7" podUID="a52dcf22-6930-49b0-a486-c54c89f75497"
	Apr 29 11:43:27 addons-760922 kubelet[1488]: I0429 11:43:27.299701    1488 scope.go:117] "RemoveContainer" containerID="7e48b3e793dbc25609676f1fb73b5fef2f5415848c08afab2f345f0260d42894"
	Apr 29 11:43:27 addons-760922 kubelet[1488]: E0429 11:43:27.299935    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-cppn7_default(a52dcf22-6930-49b0-a486-c54c89f75497)\"" pod="default/hello-world-app-86c47465fc-cppn7" podUID="a52dcf22-6930-49b0-a486-c54c89f75497"
	Apr 29 11:43:40 addons-760922 kubelet[1488]: I0429 11:43:40.299332    1488 scope.go:117] "RemoveContainer" containerID="7e48b3e793dbc25609676f1fb73b5fef2f5415848c08afab2f345f0260d42894"
	Apr 29 11:43:40 addons-760922 kubelet[1488]: E0429 11:43:40.299624    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-cppn7_default(a52dcf22-6930-49b0-a486-c54c89f75497)\"" pod="default/hello-world-app-86c47465fc-cppn7" podUID="a52dcf22-6930-49b0-a486-c54c89f75497"
	Apr 29 11:43:46 addons-760922 kubelet[1488]: I0429 11:43:46.291881    1488 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/55fed84e-6197-4372-857c-598bbe503660-tmp-dir\") pod \"55fed84e-6197-4372-857c-598bbe503660\" (UID: \"55fed84e-6197-4372-857c-598bbe503660\") "
	Apr 29 11:43:46 addons-760922 kubelet[1488]: I0429 11:43:46.291941    1488 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpgbz\" (UniqueName: \"kubernetes.io/projected/55fed84e-6197-4372-857c-598bbe503660-kube-api-access-lpgbz\") pod \"55fed84e-6197-4372-857c-598bbe503660\" (UID: \"55fed84e-6197-4372-857c-598bbe503660\") "
	Apr 29 11:43:46 addons-760922 kubelet[1488]: I0429 11:43:46.293289    1488 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55fed84e-6197-4372-857c-598bbe503660-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "55fed84e-6197-4372-857c-598bbe503660" (UID: "55fed84e-6197-4372-857c-598bbe503660"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Apr 29 11:43:46 addons-760922 kubelet[1488]: I0429 11:43:46.299222    1488 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55fed84e-6197-4372-857c-598bbe503660-kube-api-access-lpgbz" (OuterVolumeSpecName: "kube-api-access-lpgbz") pod "55fed84e-6197-4372-857c-598bbe503660" (UID: "55fed84e-6197-4372-857c-598bbe503660"). InnerVolumeSpecName "kube-api-access-lpgbz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 29 11:43:46 addons-760922 kubelet[1488]: I0429 11:43:46.392881    1488 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lpgbz\" (UniqueName: \"kubernetes.io/projected/55fed84e-6197-4372-857c-598bbe503660-kube-api-access-lpgbz\") on node \"addons-760922\" DevicePath \"\""
	Apr 29 11:43:46 addons-760922 kubelet[1488]: I0429 11:43:46.392925    1488 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/55fed84e-6197-4372-857c-598bbe503660-tmp-dir\") on node \"addons-760922\" DevicePath \"\""
	Apr 29 11:43:46 addons-760922 kubelet[1488]: I0429 11:43:46.735013    1488 scope.go:117] "RemoveContainer" containerID="052392dd63f2aa05948d04f335570315575ff7676cd7411497c545d8174bfcc1"
	Apr 29 11:43:46 addons-760922 kubelet[1488]: I0429 11:43:46.760860    1488 scope.go:117] "RemoveContainer" containerID="052392dd63f2aa05948d04f335570315575ff7676cd7411497c545d8174bfcc1"
	Apr 29 11:43:46 addons-760922 kubelet[1488]: E0429 11:43:46.761270    1488 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"052392dd63f2aa05948d04f335570315575ff7676cd7411497c545d8174bfcc1\": container with ID starting with 052392dd63f2aa05948d04f335570315575ff7676cd7411497c545d8174bfcc1 not found: ID does not exist" containerID="052392dd63f2aa05948d04f335570315575ff7676cd7411497c545d8174bfcc1"
	Apr 29 11:43:46 addons-760922 kubelet[1488]: I0429 11:43:46.761303    1488 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"052392dd63f2aa05948d04f335570315575ff7676cd7411497c545d8174bfcc1"} err="failed to get container status \"052392dd63f2aa05948d04f335570315575ff7676cd7411497c545d8174bfcc1\": rpc error: code = NotFound desc = could not find container \"052392dd63f2aa05948d04f335570315575ff7676cd7411497c545d8174bfcc1\": container with ID starting with 052392dd63f2aa05948d04f335570315575ff7676cd7411497c545d8174bfcc1 not found: ID does not exist"
	
	
	==> storage-provisioner [ea33593fdd9125a40506b352fa2ad2a69677db18151afaf9d4dc618415b44b54] <==
	I0429 11:35:39.730021       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 11:35:39.807463       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 11:35:39.807512       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 11:35:39.817005       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 11:35:39.817350       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-760922_6c501549-51d6-4626-92bc-f0d5b966207a!
	I0429 11:35:39.817453       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"274d0fc0-fa2f-4f39-a170-3e2ee4d9c376", APIVersion:"v1", ResourceVersion:"925", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-760922_6c501549-51d6-4626-92bc-f0d5b966207a became leader
	I0429 11:35:39.917827       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-760922_6c501549-51d6-4626-92bc-f0d5b966207a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-760922 -n addons-760922
helpers_test.go:261: (dbg) Run:  kubectl --context addons-760922 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (325.67s)
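The kube-controller-manager log above shows the garbage collector's metadata informer failing over and over with `failed to list *v1.PartialObjectMetadata: the server could not find the requested resource`, which is the usual symptom of the `v1beta1.metrics.k8s.io` APIService no longer being served once metrics-server goes away (on a live cluster, `kubectl get apiservice v1beta1.metrics.k8s.io` would show it as unavailable). A quick sanity check when triaging a saved log is to confirm that every reflector failure is the same missing aggregated resource; a minimal sketch, using a hypothetical log path `/tmp/cm.log` seeded with two representative lines from the output above:

```shell
# Seed a scratch file with two of the reflector errors from the report
# (in real triage, /tmp/cm.log would be the captured controller-manager log).
cat > /tmp/cm.log <<'EOF'
E0429 11:42:07.111733 reflector.go:150] failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0429 11:43:14.305216 reflector.go:150] failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
EOF

# Count how many failures mention the missing aggregated resource;
# if this equals the total number of reflector errors, they all share one cause.
grep -c 'the server could not find the requested resource' /tmp/cm.log
```

If the count matches the total number of `reflector.go` errors in the log, the failures collapse to a single root cause (the dead metrics APIService) rather than a broader API-server problem.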

TestStartStop/group/old-k8s-version/serial/SecondStart (377.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-425197 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-425197 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 102 (6m13.594914004s)

-- stdout --
	* [old-k8s-version-425197] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18756
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18756-1231546/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18756-1231546/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-425197" primary control-plane node in "old-k8s-version-425197" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Restarting existing docker container for "old-k8s-version-425197" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-425197 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, dashboard, metrics-server, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0429 12:29:30.062458 1423526 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:29:30.062692 1423526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:29:30.062733 1423526 out.go:304] Setting ErrFile to fd 2...
	I0429 12:29:30.062761 1423526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:29:30.063072 1423526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18756-1231546/.minikube/bin
	I0429 12:29:30.063520 1423526 out.go:298] Setting JSON to false
	I0429 12:29:30.064594 1423526 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":29514,"bootTime":1714364256,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0429 12:29:30.064760 1423526 start.go:139] virtualization:  
	I0429 12:29:30.068038 1423526 out.go:177] * [old-k8s-version-425197] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0429 12:29:30.070727 1423526 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 12:29:30.070805 1423526 notify.go:220] Checking for updates...
	I0429 12:29:30.075664 1423526 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:29:30.078225 1423526 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18756-1231546/kubeconfig
	I0429 12:29:30.080162 1423526 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18756-1231546/.minikube
	I0429 12:29:30.082742 1423526 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0429 12:29:30.084773 1423526 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:29:30.087605 1423526 config.go:182] Loaded profile config "old-k8s-version-425197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 12:29:30.090500 1423526 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0429 12:29:30.092455 1423526 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:29:30.118817 1423526 docker.go:122] docker version: linux-26.1.0:Docker Engine - Community
	I0429 12:29:30.118950 1423526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 12:29:30.204205 1423526 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:67 SystemTime:2024-04-29 12:29:30.192883962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 12:29:30.204326 1423526 docker.go:295] overlay module found
	I0429 12:29:30.206708 1423526 out.go:177] * Using the docker driver based on existing profile
	I0429 12:29:30.208918 1423526 start.go:297] selected driver: docker
	I0429 12:29:30.208940 1423526 start.go:901] validating driver "docker" against &{Name:old-k8s-version-425197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:29:30.209067 1423526 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:29:30.209716 1423526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 12:29:30.280551 1423526 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:67 SystemTime:2024-04-29 12:29:30.269083528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 12:29:30.280932 1423526 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:29:30.280984 1423526 cni.go:84] Creating CNI manager for ""
	I0429 12:29:30.280997 1423526 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 12:29:30.281037 1423526 start.go:340] cluster config:
	{Name:old-k8s-version-425197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:29:30.283550 1423526 out.go:177] * Starting "old-k8s-version-425197" primary control-plane node in "old-k8s-version-425197" cluster
	I0429 12:29:30.285717 1423526 cache.go:121] Beginning downloading kic base image for docker with crio
	I0429 12:29:30.288273 1423526 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 12:29:30.290623 1423526 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 12:29:30.290676 1423526 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18756-1231546/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0429 12:29:30.290686 1423526 cache.go:56] Caching tarball of preloaded images
	I0429 12:29:30.290789 1423526 preload.go:173] Found /home/jenkins/minikube-integration/18756-1231546/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0429 12:29:30.290799 1423526 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0429 12:29:30.290905 1423526 profile.go:143] Saving config to /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/config.json ...
	I0429 12:29:30.291129 1423526 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 12:29:30.306759 1423526 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 12:29:30.306787 1423526 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 12:29:30.306812 1423526 cache.go:194] Successfully downloaded all kic artifacts
	I0429 12:29:30.306845 1423526 start.go:360] acquireMachinesLock for old-k8s-version-425197: {Name:mkea9c46670743e09dc8380ddda62d78a2dbc9b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:29:30.306921 1423526 start.go:364] duration metric: took 48.262µs to acquireMachinesLock for "old-k8s-version-425197"
	I0429 12:29:30.306947 1423526 start.go:96] Skipping create...Using existing machine configuration
	I0429 12:29:30.306965 1423526 fix.go:54] fixHost starting: 
	I0429 12:29:30.307237 1423526 cli_runner.go:164] Run: docker container inspect old-k8s-version-425197 --format={{.State.Status}}
	I0429 12:29:30.324499 1423526 fix.go:112] recreateIfNeeded on old-k8s-version-425197: state=Stopped err=<nil>
	W0429 12:29:30.324531 1423526 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 12:29:30.327276 1423526 out.go:177] * Restarting existing docker container for "old-k8s-version-425197" ...
	I0429 12:29:30.329596 1423526 cli_runner.go:164] Run: docker start old-k8s-version-425197
	I0429 12:29:30.709883 1423526 cli_runner.go:164] Run: docker container inspect old-k8s-version-425197 --format={{.State.Status}}
	I0429 12:29:30.769474 1423526 kic.go:430] container "old-k8s-version-425197" state is running.
	I0429 12:29:30.769834 1423526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-425197
	I0429 12:29:30.794524 1423526 profile.go:143] Saving config to /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/config.json ...
	I0429 12:29:30.794745 1423526 machine.go:94] provisionDockerMachine start ...
	I0429 12:29:30.794803 1423526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-425197
	I0429 12:29:30.813270 1423526 main.go:141] libmachine: Using SSH client type: native
	I0429 12:29:30.815504 1423526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34568 <nil> <nil>}
	I0429 12:29:30.815543 1423526 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 12:29:30.823080 1423526 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0429 12:29:33.964317 1423526 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-425197
	
	I0429 12:29:33.964343 1423526 ubuntu.go:169] provisioning hostname "old-k8s-version-425197"
	I0429 12:29:33.964443 1423526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-425197
	I0429 12:29:33.985801 1423526 main.go:141] libmachine: Using SSH client type: native
	I0429 12:29:33.986044 1423526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34568 <nil> <nil>}
	I0429 12:29:33.986061 1423526 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-425197 && echo "old-k8s-version-425197" | sudo tee /etc/hostname
	I0429 12:29:34.130647 1423526 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-425197
	
	I0429 12:29:34.130815 1423526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-425197
	I0429 12:29:34.153519 1423526 main.go:141] libmachine: Using SSH client type: native
	I0429 12:29:34.153772 1423526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34568 <nil> <nil>}
	I0429 12:29:34.153790 1423526 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-425197' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-425197/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-425197' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:29:34.289158 1423526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:29:34.289187 1423526 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18756-1231546/.minikube CaCertPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18756-1231546/.minikube}
	I0429 12:29:34.289224 1423526 ubuntu.go:177] setting up certificates
	I0429 12:29:34.289234 1423526 provision.go:84] configureAuth start
	I0429 12:29:34.289311 1423526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-425197
	I0429 12:29:34.311521 1423526 provision.go:143] copyHostCerts
	I0429 12:29:34.311591 1423526 exec_runner.go:144] found /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.pem, removing ...
	I0429 12:29:34.311604 1423526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.pem
	I0429 12:29:34.311681 1423526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.pem (1082 bytes)
	I0429 12:29:34.311793 1423526 exec_runner.go:144] found /home/jenkins/minikube-integration/18756-1231546/.minikube/cert.pem, removing ...
	I0429 12:29:34.311808 1423526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18756-1231546/.minikube/cert.pem
	I0429 12:29:34.311839 1423526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18756-1231546/.minikube/cert.pem (1123 bytes)
	I0429 12:29:34.311904 1423526 exec_runner.go:144] found /home/jenkins/minikube-integration/18756-1231546/.minikube/key.pem, removing ...
	I0429 12:29:34.311913 1423526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18756-1231546/.minikube/key.pem
	I0429 12:29:34.311942 1423526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18756-1231546/.minikube/key.pem (1675 bytes)
	I0429 12:29:34.312004 1423526 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-425197 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-425197]
	I0429 12:29:34.652065 1423526 provision.go:177] copyRemoteCerts
	I0429 12:29:34.652140 1423526 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:29:34.652189 1423526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-425197
	I0429 12:29:34.669758 1423526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34568 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/old-k8s-version-425197/id_rsa Username:docker}
	I0429 12:29:34.761563 1423526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 12:29:34.785950 1423526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0429 12:29:34.808878 1423526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 12:29:34.833114 1423526 provision.go:87] duration metric: took 543.861833ms to configureAuth
	I0429 12:29:34.833135 1423526 ubuntu.go:193] setting minikube options for container-runtime
	I0429 12:29:34.833337 1423526 config.go:182] Loaded profile config "old-k8s-version-425197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 12:29:34.833441 1423526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-425197
	I0429 12:29:34.852431 1423526 main.go:141] libmachine: Using SSH client type: native
	I0429 12:29:34.852697 1423526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 34568 <nil> <nil>}
	I0429 12:29:34.852715 1423526 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 12:29:35.267708 1423526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 12:29:35.267732 1423526 machine.go:97] duration metric: took 4.472978118s to provisionDockerMachine
	I0429 12:29:35.267744 1423526 start.go:293] postStartSetup for "old-k8s-version-425197" (driver="docker")
	I0429 12:29:35.267756 1423526 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:29:35.267820 1423526 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:29:35.267877 1423526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-425197
	I0429 12:29:35.292576 1423526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34568 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/old-k8s-version-425197/id_rsa Username:docker}
	I0429 12:29:35.398729 1423526 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:29:35.402513 1423526 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0429 12:29:35.402548 1423526 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0429 12:29:35.402559 1423526 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0429 12:29:35.402566 1423526 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0429 12:29:35.402576 1423526 filesync.go:126] Scanning /home/jenkins/minikube-integration/18756-1231546/.minikube/addons for local assets ...
	I0429 12:29:35.402629 1423526 filesync.go:126] Scanning /home/jenkins/minikube-integration/18756-1231546/.minikube/files for local assets ...
	I0429 12:29:35.402705 1423526 filesync.go:149] local asset: /home/jenkins/minikube-integration/18756-1231546/.minikube/files/etc/ssl/certs/12369742.pem -> 12369742.pem in /etc/ssl/certs
	I0429 12:29:35.402814 1423526 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:29:35.417886 1423526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/files/etc/ssl/certs/12369742.pem --> /etc/ssl/certs/12369742.pem (1708 bytes)
	I0429 12:29:35.453973 1423526 start.go:296] duration metric: took 186.214435ms for postStartSetup
	I0429 12:29:35.454121 1423526 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:29:35.454212 1423526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-425197
	I0429 12:29:35.477351 1423526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34568 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/old-k8s-version-425197/id_rsa Username:docker}
	I0429 12:29:35.567007 1423526 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 12:29:35.573723 1423526 fix.go:56] duration metric: took 5.266758261s for fixHost
	I0429 12:29:35.573760 1423526 start.go:83] releasing machines lock for "old-k8s-version-425197", held for 5.266826126s
	I0429 12:29:35.573862 1423526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-425197
	I0429 12:29:35.593358 1423526 ssh_runner.go:195] Run: cat /version.json
	I0429 12:29:35.593421 1423526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-425197
	I0429 12:29:35.593427 1423526 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:29:35.593496 1423526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-425197
	I0429 12:29:35.627452 1423526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34568 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/old-k8s-version-425197/id_rsa Username:docker}
	I0429 12:29:35.648488 1423526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34568 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/old-k8s-version-425197/id_rsa Username:docker}
	I0429 12:29:35.740788 1423526 ssh_runner.go:195] Run: systemctl --version
	I0429 12:29:35.857265 1423526 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 12:29:36.023791 1423526 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 12:29:36.028763 1423526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:29:36.038847 1423526 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0429 12:29:36.038925 1423526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:29:36.049502 1423526 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 12:29:36.049571 1423526 start.go:494] detecting cgroup driver to use...
	I0429 12:29:36.049605 1423526 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0429 12:29:36.049661 1423526 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:29:36.063979 1423526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:29:36.077463 1423526 docker.go:217] disabling cri-docker service (if available) ...
	I0429 12:29:36.077527 1423526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 12:29:36.092614 1423526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 12:29:36.105899 1423526 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 12:29:36.244475 1423526 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 12:29:36.368388 1423526 docker.go:233] disabling docker service ...
	I0429 12:29:36.368509 1423526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 12:29:36.385896 1423526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 12:29:36.399631 1423526 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 12:29:36.508010 1423526 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 12:29:36.620581 1423526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 12:29:36.634334 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:29:36.653945 1423526 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0429 12:29:36.654065 1423526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:29:36.664869 1423526 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 12:29:36.664991 1423526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:29:36.675620 1423526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:29:36.686496 1423526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 12:29:36.696937 1423526 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:29:36.706600 1423526 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:29:36.715963 1423526 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:29:36.724850 1423526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:29:36.836251 1423526 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 12:29:36.976203 1423526 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 12:29:36.976361 1423526 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 12:29:36.983700 1423526 start.go:562] Will wait 60s for crictl version
	I0429 12:29:36.983832 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:29:36.990597 1423526 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 12:29:37.061074 1423526 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0429 12:29:37.061199 1423526 ssh_runner.go:195] Run: crio --version
	I0429 12:29:37.123211 1423526 ssh_runner.go:195] Run: crio --version
	I0429 12:29:37.174959 1423526 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0429 12:29:37.176968 1423526 cli_runner.go:164] Run: docker network inspect old-k8s-version-425197 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 12:29:37.194411 1423526 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0429 12:29:37.198454 1423526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:29:37.209791 1423526 kubeadm.go:877] updating cluster {Name:old-k8s-version-425197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 12:29:37.209909 1423526 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 12:29:37.209959 1423526 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 12:29:37.264751 1423526 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 12:29:37.264778 1423526 crio.go:433] Images already preloaded, skipping extraction
	I0429 12:29:37.264831 1423526 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 12:29:37.314169 1423526 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 12:29:37.314205 1423526 cache_images.go:84] Images are preloaded, skipping loading
	I0429 12:29:37.314238 1423526 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.20.0 crio true true} ...
	I0429 12:29:37.314384 1423526 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-425197 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 12:29:37.314505 1423526 ssh_runner.go:195] Run: crio config
	I0429 12:29:37.412337 1423526 cni.go:84] Creating CNI manager for ""
	I0429 12:29:37.412363 1423526 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 12:29:37.412399 1423526 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 12:29:37.412427 1423526 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-425197 NodeName:old-k8s-version-425197 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0429 12:29:37.412620 1423526 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-425197"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 12:29:37.412763 1423526 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0429 12:29:37.422676 1423526 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 12:29:37.422772 1423526 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 12:29:37.432375 1423526 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I0429 12:29:37.452696 1423526 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 12:29:37.473000 1423526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0429 12:29:37.494379 1423526 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0429 12:29:37.498637 1423526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:29:37.510457 1423526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:29:37.623347 1423526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:29:37.639876 1423526 certs.go:68] Setting up /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197 for IP: 192.168.85.2
	I0429 12:29:37.639898 1423526 certs.go:194] generating shared ca certs ...
	I0429 12:29:37.639935 1423526 certs.go:226] acquiring lock for ca certs: {Name:mkcd7972b318778b7d6fba570abab6a01a410b21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:29:37.640101 1423526 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.key
	I0429 12:29:37.640186 1423526 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/proxy-client-ca.key
	I0429 12:29:37.640201 1423526 certs.go:256] generating profile certs ...
	I0429 12:29:37.640337 1423526 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/client.key
	I0429 12:29:37.640433 1423526 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/apiserver.key.76e8b9fd
	I0429 12:29:37.640500 1423526 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/proxy-client.key
	I0429 12:29:37.640639 1423526 certs.go:484] found cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/1236974.pem (1338 bytes)
	W0429 12:29:37.640707 1423526 certs.go:480] ignoring /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/1236974_empty.pem, impossibly tiny 0 bytes
	I0429 12:29:37.640722 1423526 certs.go:484] found cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca-key.pem (1679 bytes)
	I0429 12:29:37.640747 1423526 certs.go:484] found cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/ca.pem (1082 bytes)
	I0429 12:29:37.640808 1423526 certs.go:484] found cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/cert.pem (1123 bytes)
	I0429 12:29:37.640854 1423526 certs.go:484] found cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/key.pem (1675 bytes)
	I0429 12:29:37.640943 1423526 certs.go:484] found cert: /home/jenkins/minikube-integration/18756-1231546/.minikube/files/etc/ssl/certs/12369742.pem (1708 bytes)
	I0429 12:29:37.641710 1423526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 12:29:37.729114 1423526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0429 12:29:37.798965 1423526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 12:29:37.839700 1423526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 12:29:37.875448 1423526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0429 12:29:37.902966 1423526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 12:29:37.928545 1423526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 12:29:37.954321 1423526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 12:29:37.980738 1423526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 12:29:38.009902 1423526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/certs/1236974.pem --> /usr/share/ca-certificates/1236974.pem (1338 bytes)
	I0429 12:29:38.039446 1423526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18756-1231546/.minikube/files/etc/ssl/certs/12369742.pem --> /usr/share/ca-certificates/12369742.pem (1708 bytes)
	I0429 12:29:38.068352 1423526 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 12:29:38.090392 1423526 ssh_runner.go:195] Run: openssl version
	I0429 12:29:38.097198 1423526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 12:29:38.107920 1423526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:29:38.112106 1423526 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 11:34 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:29:38.112192 1423526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:29:38.119788 1423526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 12:29:38.130096 1423526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1236974.pem && ln -fs /usr/share/ca-certificates/1236974.pem /etc/ssl/certs/1236974.pem"
	I0429 12:29:38.140550 1423526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1236974.pem
	I0429 12:29:38.144755 1423526 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 11:45 /usr/share/ca-certificates/1236974.pem
	I0429 12:29:38.144847 1423526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1236974.pem
	I0429 12:29:38.152782 1423526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1236974.pem /etc/ssl/certs/51391683.0"
	I0429 12:29:38.162869 1423526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12369742.pem && ln -fs /usr/share/ca-certificates/12369742.pem /etc/ssl/certs/12369742.pem"
	I0429 12:29:38.173437 1423526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12369742.pem
	I0429 12:29:38.177934 1423526 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 11:45 /usr/share/ca-certificates/12369742.pem
	I0429 12:29:38.178030 1423526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12369742.pem
	I0429 12:29:38.185799 1423526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12369742.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 12:29:38.195525 1423526 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 12:29:38.199418 1423526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 12:29:38.206455 1423526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 12:29:38.213462 1423526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 12:29:38.220318 1423526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 12:29:38.227369 1423526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 12:29:38.234352 1423526 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 12:29:38.241307 1423526 kubeadm.go:391] StartCluster: {Name:old-k8s-version-425197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-425197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:29:38.241462 1423526 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 12:29:38.241558 1423526 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 12:29:38.286028 1423526 cri.go:89] found id: ""
	I0429 12:29:38.286178 1423526 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 12:29:38.296535 1423526 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 12:29:38.296609 1423526 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 12:29:38.296629 1423526 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 12:29:38.296734 1423526 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 12:29:38.306485 1423526 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:29:38.307040 1423526 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-425197" does not appear in /home/jenkins/minikube-integration/18756-1231546/kubeconfig
	I0429 12:29:38.307211 1423526 kubeconfig.go:62] /home/jenkins/minikube-integration/18756-1231546/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-425197" cluster setting kubeconfig missing "old-k8s-version-425197" context setting]
	I0429 12:29:38.307587 1423526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/kubeconfig: {Name:mk3a783043373f26fbcf8c9fca1b15742ae22d84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:29:38.309218 1423526 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 12:29:38.319147 1423526 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.85.2
	I0429 12:29:38.319227 1423526 kubeadm.go:591] duration metric: took 22.568588ms to restartPrimaryControlPlane
	I0429 12:29:38.319249 1423526 kubeadm.go:393] duration metric: took 77.951261ms to StartCluster
	I0429 12:29:38.319291 1423526 settings.go:142] acquiring lock: {Name:mk0ef22430695db96615335cd2f3ba564b8d0f0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:29:38.319379 1423526 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18756-1231546/kubeconfig
	I0429 12:29:38.320223 1423526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/kubeconfig: {Name:mk3a783043373f26fbcf8c9fca1b15742ae22d84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:29:38.320507 1423526 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 12:29:38.325258 1423526 out.go:177] * Verifying Kubernetes components...
	I0429 12:29:38.320962 1423526 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 12:29:38.321608 1423526 config.go:182] Loaded profile config "old-k8s-version-425197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 12:29:38.327231 1423526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:29:38.325574 1423526 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-425197"
	I0429 12:29:38.327561 1423526 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-425197"
	W0429 12:29:38.327590 1423526 addons.go:243] addon storage-provisioner should already be in state true
	I0429 12:29:38.327651 1423526 host.go:66] Checking if "old-k8s-version-425197" exists ...
	I0429 12:29:38.328187 1423526 cli_runner.go:164] Run: docker container inspect old-k8s-version-425197 --format={{.State.Status}}
	I0429 12:29:38.325584 1423526 addons.go:69] Setting dashboard=true in profile "old-k8s-version-425197"
	I0429 12:29:38.328498 1423526 addons.go:234] Setting addon dashboard=true in "old-k8s-version-425197"
	W0429 12:29:38.328519 1423526 addons.go:243] addon dashboard should already be in state true
	I0429 12:29:38.328567 1423526 host.go:66] Checking if "old-k8s-version-425197" exists ...
	I0429 12:29:38.329207 1423526 cli_runner.go:164] Run: docker container inspect old-k8s-version-425197 --format={{.State.Status}}
	I0429 12:29:38.325591 1423526 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-425197"
	I0429 12:29:38.329697 1423526 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-425197"
	I0429 12:29:38.329960 1423526 cli_runner.go:164] Run: docker container inspect old-k8s-version-425197 --format={{.State.Status}}
	I0429 12:29:38.325599 1423526 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-425197"
	I0429 12:29:38.330211 1423526 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-425197"
	W0429 12:29:38.330220 1423526 addons.go:243] addon metrics-server should already be in state true
	I0429 12:29:38.330243 1423526 host.go:66] Checking if "old-k8s-version-425197" exists ...
	I0429 12:29:38.330634 1423526 cli_runner.go:164] Run: docker container inspect old-k8s-version-425197 --format={{.State.Status}}
	I0429 12:29:38.375235 1423526 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0429 12:29:38.381104 1423526 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0429 12:29:38.386379 1423526 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0429 12:29:38.386405 1423526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0429 12:29:38.386477 1423526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-425197
	I0429 12:29:38.410651 1423526 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 12:29:38.410297 1423526 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-425197"
	W0429 12:29:38.412495 1423526 addons.go:243] addon default-storageclass should already be in state true
	I0429 12:29:38.412532 1423526 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:29:38.412547 1423526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 12:29:38.412550 1423526 host.go:66] Checking if "old-k8s-version-425197" exists ...
	I0429 12:29:38.412604 1423526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-425197
	I0429 12:29:38.413067 1423526 cli_runner.go:164] Run: docker container inspect old-k8s-version-425197 --format={{.State.Status}}
	I0429 12:29:38.424565 1423526 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0429 12:29:38.428651 1423526 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 12:29:38.428705 1423526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 12:29:38.428776 1423526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-425197
	I0429 12:29:38.456778 1423526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34568 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/old-k8s-version-425197/id_rsa Username:docker}
	I0429 12:29:38.463165 1423526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34568 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/old-k8s-version-425197/id_rsa Username:docker}
	I0429 12:29:38.488861 1423526 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 12:29:38.488889 1423526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 12:29:38.488952 1423526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-425197
	I0429 12:29:38.489728 1423526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34568 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/old-k8s-version-425197/id_rsa Username:docker}
	I0429 12:29:38.520773 1423526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34568 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/old-k8s-version-425197/id_rsa Username:docker}
	I0429 12:29:38.630610 1423526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:29:38.643823 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:29:38.674214 1423526 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-425197" to be "Ready" ...
	I0429 12:29:38.707173 1423526 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 12:29:38.707236 1423526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0429 12:29:38.727884 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 12:29:38.761440 1423526 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0429 12:29:38.761463 1423526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0429 12:29:38.790352 1423526 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 12:29:38.790375 1423526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 12:29:38.859363 1423526 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 12:29:38.859385 1423526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 12:29:38.863572 1423526 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0429 12:29:38.863636 1423526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0429 12:29:38.916181 1423526 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0429 12:29:38.916248 1423526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0429 12:29:38.923877 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 12:29:38.959498 1423526 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0429 12:29:38.959564 1423526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0429 12:29:39.004837 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:39.004955 1423526 retry.go:31] will retry after 319.756205ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:39.044402 1423526 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0429 12:29:39.044469 1423526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0429 12:29:39.094933 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:39.095008 1423526 retry.go:31] will retry after 283.800387ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:39.110908 1423526 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0429 12:29:39.110986 1423526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0429 12:29:39.118896 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:39.118977 1423526 retry.go:31] will retry after 274.747985ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:39.140125 1423526 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0429 12:29:39.140193 1423526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0429 12:29:39.160352 1423526 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0429 12:29:39.160380 1423526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0429 12:29:39.186907 1423526 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0429 12:29:39.186994 1423526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0429 12:29:39.217354 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0429 12:29:39.312535 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:39.312571 1423526 retry.go:31] will retry after 265.440094ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:39.325860 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:29:39.379273 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0429 12:29:39.394685 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0429 12:29:39.437157 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:39.437195 1423526 retry.go:31] will retry after 338.595367ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0429 12:29:39.512262 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:39.512302 1423526 retry.go:31] will retry after 437.575066ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0429 12:29:39.568269 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:39.568304 1423526 retry.go:31] will retry after 451.297571ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:39.578471 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0429 12:29:39.693126 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:39.693168 1423526 retry.go:31] will retry after 298.918696ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:39.776539 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0429 12:29:39.870874 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:39.870915 1423526 retry.go:31] will retry after 541.7755ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:39.950281 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0429 12:29:39.992981 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0429 12:29:40.020553 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0429 12:29:40.130252 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:40.130333 1423526 retry.go:31] will retry after 841.066499ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0429 12:29:40.308357 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:40.308398 1423526 retry.go:31] will retry after 680.408618ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0429 12:29:40.308504 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:40.308519 1423526 retry.go:31] will retry after 542.233805ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:40.412900 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0429 12:29:40.479794 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:40.479826 1423526 retry.go:31] will retry after 912.439906ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:40.675531 1423526 node_ready.go:53] error getting node "old-k8s-version-425197": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-425197": dial tcp 192.168.85.2:8443: connect: connection refused
	I0429 12:29:40.851844 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 12:29:40.972427 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0429 12:29:40.988055 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:40.988159 1423526 retry.go:31] will retry after 735.396676ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:40.989289 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0429 12:29:41.160258 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:41.160358 1423526 retry.go:31] will retry after 814.498657ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0429 12:29:41.182957 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:41.183042 1423526 retry.go:31] will retry after 466.844422ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:41.393451 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0429 12:29:41.530554 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:41.530645 1423526 retry.go:31] will retry after 1.357697339s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:41.650964 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0429 12:29:41.723878 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0429 12:29:41.786910 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:41.786984 1423526 retry.go:31] will retry after 1.32078294s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0429 12:29:41.884757 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:41.884835 1423526 retry.go:31] will retry after 886.437374ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:41.975156 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0429 12:29:42.092616 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:42.092763 1423526 retry.go:31] will retry after 1.106535704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:42.771623 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 12:29:42.889016 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0429 12:29:42.956478 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:42.956573 1423526 retry.go:31] will retry after 1.882216868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0429 12:29:43.022210 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:43.022292 1423526 retry.go:31] will retry after 1.541895075s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:43.108586 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0429 12:29:43.175241 1423526 node_ready.go:53] error getting node "old-k8s-version-425197": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-425197": dial tcp 192.168.85.2:8443: connect: connection refused
	I0429 12:29:43.199521 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0429 12:29:43.274241 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:43.274341 1423526 retry.go:31] will retry after 2.715219101s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0429 12:29:43.345949 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:43.346025 1423526 retry.go:31] will retry after 1.820226638s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:44.565325 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0429 12:29:44.689160 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:44.689240 1423526 retry.go:31] will retry after 1.798482465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:44.839617 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0429 12:29:44.956268 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:44.956361 1423526 retry.go:31] will retry after 2.071817613s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:45.167165 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0429 12:29:45.330247 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:45.330334 1423526 retry.go:31] will retry after 3.548115218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:45.675129 1423526 node_ready.go:53] error getting node "old-k8s-version-425197": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-425197": dial tcp 192.168.85.2:8443: connect: connection refused
	I0429 12:29:45.990475 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0429 12:29:46.112209 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:46.112293 1423526 retry.go:31] will retry after 1.629993189s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:46.488122 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0429 12:29:46.563794 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:46.563828 1423526 retry.go:31] will retry after 3.366606992s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:47.028948 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0429 12:29:47.322030 1423526 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:47.322069 1423526 retry.go:31] will retry after 2.376504476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0429 12:29:47.743369 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0429 12:29:48.878679 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0429 12:29:49.698860 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 12:29:49.931236 1423526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:29:56.034033 1423526 node_ready.go:49] node "old-k8s-version-425197" has status "Ready":"True"
	I0429 12:29:56.034058 1423526 node_ready.go:38] duration metric: took 17.359768121s for node "old-k8s-version-425197" to be "Ready" ...
	I0429 12:29:56.034069 1423526 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 12:29:56.351914 1423526 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-hzh92" in "kube-system" namespace to be "Ready" ...
	I0429 12:29:56.651744 1423526 pod_ready.go:92] pod "coredns-74ff55c5b-hzh92" in "kube-system" namespace has status "Ready":"True"
	I0429 12:29:56.651822 1423526 pod_ready.go:81] duration metric: took 299.818944ms for pod "coredns-74ff55c5b-hzh92" in "kube-system" namespace to be "Ready" ...
	I0429 12:29:56.651848 1423526 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-425197" in "kube-system" namespace to be "Ready" ...
	I0429 12:29:57.490636 1423526 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.747209181s)
	I0429 12:29:57.492903 1423526 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-425197 addons enable metrics-server
	
	I0429 12:29:57.490747 1423526 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.612037224s)
	I0429 12:29:57.490817 1423526 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.791883052s)
	I0429 12:29:57.490861 1423526 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.559602701s)
	I0429 12:29:57.493344 1423526 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-425197"
	I0429 12:29:57.506880 1423526 out.go:177] * Enabled addons: storage-provisioner, dashboard, metrics-server, default-storageclass
	I0429 12:29:57.508901 1423526 addons.go:505] duration metric: took 19.187956652s for enable addons: enabled=[storage-provisioner dashboard metrics-server default-storageclass]
	I0429 12:29:58.662821 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:01.160333 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:03.658875 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:06.160600 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:08.659260 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:11.159044 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:13.165523 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:15.659578 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:18.159534 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:20.160852 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:22.192441 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:24.657368 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:26.659286 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:29.158032 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:31.158150 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:33.159696 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:35.659044 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:38.158319 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:40.159205 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:42.168787 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:44.657757 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:46.669613 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:49.157900 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:51.158663 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:53.158704 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:55.158763 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:57.658025 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:30:59.658548 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:01.658697 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:04.160012 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:06.160634 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:08.661388 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:11.157936 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:13.158470 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:15.158596 1423526 pod_ready.go:102] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:16.659039 1423526 pod_ready.go:92] pod "etcd-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"True"
	I0429 12:31:16.659108 1423526 pod_ready.go:81] duration metric: took 1m20.007238923s for pod "etcd-old-k8s-version-425197" in "kube-system" namespace to be "Ready" ...
	I0429 12:31:16.659136 1423526 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-425197" in "kube-system" namespace to be "Ready" ...
	I0429 12:31:16.675507 1423526 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"True"
	I0429 12:31:16.675579 1423526 pod_ready.go:81] duration metric: took 16.423539ms for pod "kube-apiserver-old-k8s-version-425197" in "kube-system" namespace to be "Ready" ...
	I0429 12:31:16.675605 1423526 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-425197" in "kube-system" namespace to be "Ready" ...
	I0429 12:31:18.682438 1423526 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:20.685999 1423526 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"True"
	I0429 12:31:20.686025 1423526 pod_ready.go:81] duration metric: took 4.010401085s for pod "kube-controller-manager-old-k8s-version-425197" in "kube-system" namespace to be "Ready" ...
	I0429 12:31:20.686037 1423526 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5hlxl" in "kube-system" namespace to be "Ready" ...
	I0429 12:31:20.691720 1423526 pod_ready.go:92] pod "kube-proxy-5hlxl" in "kube-system" namespace has status "Ready":"True"
	I0429 12:31:20.691748 1423526 pod_ready.go:81] duration metric: took 5.702894ms for pod "kube-proxy-5hlxl" in "kube-system" namespace to be "Ready" ...
	I0429 12:31:20.691758 1423526 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-425197" in "kube-system" namespace to be "Ready" ...
	I0429 12:31:20.696293 1423526 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-425197" in "kube-system" namespace has status "Ready":"True"
	I0429 12:31:20.696316 1423526 pod_ready.go:81] duration metric: took 4.549956ms for pod "kube-scheduler-old-k8s-version-425197" in "kube-system" namespace to be "Ready" ...
	I0429 12:31:20.696327 1423526 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace to be "Ready" ...
	I0429 12:31:22.702549 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:25.210005 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:27.702418 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:29.702592 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:32.203232 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:34.704517 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:37.202851 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:39.702782 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:41.709985 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:44.203529 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:46.708121 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:49.202526 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:51.202777 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:53.203345 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:55.203403 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:57.702567 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:31:59.702725 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:01.703117 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:04.202420 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:06.202813 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:08.702998 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:11.202188 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:13.202404 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:15.202648 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:17.203192 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:19.205236 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:21.702316 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:24.201812 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:26.202781 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:28.702682 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:31.202966 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:33.702116 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:35.703230 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:38.202119 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:40.203165 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:42.204576 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:44.703016 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:47.201415 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:49.209666 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:51.701956 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:53.702673 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:56.202663 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:32:58.702663 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:00.703215 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:02.703370 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:05.203534 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:07.703154 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:10.203145 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:12.702544 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:14.708076 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:17.203711 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:19.702677 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:21.702963 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:24.202732 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:26.208558 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:28.702913 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:31.202070 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:33.202486 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:35.701905 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:37.703024 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:40.202602 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:42.207391 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:44.702942 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:47.202990 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:49.203155 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:51.702215 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:53.702289 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:56.203241 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:33:58.702576 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:01.202075 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:03.203019 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:05.203195 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:07.702226 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:09.703978 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:12.203051 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:14.203120 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:16.702005 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:18.702679 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:20.702798 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:23.203385 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:25.703235 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:28.207637 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:30.702987 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:33.202872 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:35.702139 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:37.703764 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:40.206946 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:42.701852 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:44.702301 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:46.702688 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:49.203179 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:51.702389 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:53.702533 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:56.202760 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:34:58.702554 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:35:00.702815 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:35:03.202352 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:35:05.701627 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:35:07.703105 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:35:10.203203 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:35:12.203403 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:35:14.702578 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:35:16.703119 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:35:18.753152 1423526 pod_ready.go:102] pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace has status "Ready":"False"
	I0429 12:35:20.702356 1423526 pod_ready.go:81] duration metric: took 4m0.006015034s for pod "metrics-server-9975d5f86-jgp2v" in "kube-system" namespace to be "Ready" ...
	E0429 12:35:20.702382 1423526 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0429 12:35:20.702391 1423526 pod_ready.go:38] duration metric: took 5m24.668311478s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 12:35:20.702406 1423526 api_server.go:52] waiting for apiserver process to appear ...
	I0429 12:35:20.702434 1423526 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 12:35:20.702498 1423526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 12:35:20.747258 1423526 cri.go:89] found id: "f70d0c48d399667aeefaece1ef554897ebeb68eb674e2b89d699347639d7798f"
	I0429 12:35:20.747280 1423526 cri.go:89] found id: ""
	I0429 12:35:20.747288 1423526 logs.go:276] 1 containers: [f70d0c48d399667aeefaece1ef554897ebeb68eb674e2b89d699347639d7798f]
	I0429 12:35:20.747353 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:35:20.750956 1423526 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 12:35:20.751035 1423526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 12:35:20.793925 1423526 cri.go:89] found id: "f87e16673fad4748fc3431ca613237d1244ca35397a18f8cc52a6f1a55706439"
	I0429 12:35:20.793946 1423526 cri.go:89] found id: ""
	I0429 12:35:20.793954 1423526 logs.go:276] 1 containers: [f87e16673fad4748fc3431ca613237d1244ca35397a18f8cc52a6f1a55706439]
	I0429 12:35:20.794011 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:35:20.797663 1423526 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 12:35:20.797738 1423526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 12:35:20.839736 1423526 cri.go:89] found id: "e7e73acd6e8014a42c001b36ed0f2d3c63e892feb1fc378fe2fdd2de335d53dc"
	I0429 12:35:20.839759 1423526 cri.go:89] found id: ""
	I0429 12:35:20.839768 1423526 logs.go:276] 1 containers: [e7e73acd6e8014a42c001b36ed0f2d3c63e892feb1fc378fe2fdd2de335d53dc]
	I0429 12:35:20.839825 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:35:20.843530 1423526 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 12:35:20.843609 1423526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 12:35:20.886809 1423526 cri.go:89] found id: "2d3031452f789fb6a9b7327ac93e3217d5bdc6b6383542150d20d2538540820b"
	I0429 12:35:20.886833 1423526 cri.go:89] found id: ""
	I0429 12:35:20.886842 1423526 logs.go:276] 1 containers: [2d3031452f789fb6a9b7327ac93e3217d5bdc6b6383542150d20d2538540820b]
	I0429 12:35:20.886935 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:35:20.890565 1423526 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 12:35:20.890643 1423526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 12:35:20.931053 1423526 cri.go:89] found id: "18f81edeecda84f01c57c2228b6bbacaea535fd1209a9525e4f4eb7e1ccec852"
	I0429 12:35:20.931116 1423526 cri.go:89] found id: ""
	I0429 12:35:20.931137 1423526 logs.go:276] 1 containers: [18f81edeecda84f01c57c2228b6bbacaea535fd1209a9525e4f4eb7e1ccec852]
	I0429 12:35:20.931214 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:35:20.934837 1423526 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 12:35:20.934936 1423526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 12:35:20.974385 1423526 cri.go:89] found id: "f54a7f3a396ba3db9133a1183e07c7fc6d06a8419cfc3ff1a57aa605a963a445"
	I0429 12:35:20.974454 1423526 cri.go:89] found id: ""
	I0429 12:35:20.974476 1423526 logs.go:276] 1 containers: [f54a7f3a396ba3db9133a1183e07c7fc6d06a8419cfc3ff1a57aa605a963a445]
	I0429 12:35:20.974547 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:35:20.978067 1423526 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 12:35:20.978164 1423526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 12:35:21.021482 1423526 cri.go:89] found id: "b187065048c48a70de847611a7bfa1e7d604a1f06e81f41522e2c4623edd5594"
	I0429 12:35:21.021547 1423526 cri.go:89] found id: ""
	I0429 12:35:21.021569 1423526 logs.go:276] 1 containers: [b187065048c48a70de847611a7bfa1e7d604a1f06e81f41522e2c4623edd5594]
	I0429 12:35:21.021656 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:35:21.025533 1423526 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 12:35:21.025622 1423526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 12:35:21.064592 1423526 cri.go:89] found id: "3ab604aa41320d5c9477a97ac38a8892497dd2dbef11c5750c4c98c0fcff96bd"
	I0429 12:35:21.064618 1423526 cri.go:89] found id: "a8ebc8ecfe2f93123a8fd7e1b220084645143d83b644d4ac445a896b7a30c0cb"
	I0429 12:35:21.064623 1423526 cri.go:89] found id: ""
	I0429 12:35:21.064630 1423526 logs.go:276] 2 containers: [3ab604aa41320d5c9477a97ac38a8892497dd2dbef11c5750c4c98c0fcff96bd a8ebc8ecfe2f93123a8fd7e1b220084645143d83b644d4ac445a896b7a30c0cb]
	I0429 12:35:21.064755 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:35:21.068752 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:35:21.072335 1423526 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 12:35:21.072428 1423526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 12:35:21.118125 1423526 cri.go:89] found id: "8e449405f978982d5dbc2fab079a55aed290a3bb6c162ea759272192a3d96fa9"
	I0429 12:35:21.118152 1423526 cri.go:89] found id: ""
	I0429 12:35:21.118171 1423526 logs.go:276] 1 containers: [8e449405f978982d5dbc2fab079a55aed290a3bb6c162ea759272192a3d96fa9]
	I0429 12:35:21.118229 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:35:21.122208 1423526 logs.go:123] Gathering logs for kindnet [b187065048c48a70de847611a7bfa1e7d604a1f06e81f41522e2c4623edd5594] ...
	I0429 12:35:21.122235 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b187065048c48a70de847611a7bfa1e7d604a1f06e81f41522e2c4623edd5594"
	I0429 12:35:21.170261 1423526 logs.go:123] Gathering logs for storage-provisioner [3ab604aa41320d5c9477a97ac38a8892497dd2dbef11c5750c4c98c0fcff96bd] ...
	I0429 12:35:21.170290 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ab604aa41320d5c9477a97ac38a8892497dd2dbef11c5750c4c98c0fcff96bd"
	I0429 12:35:21.217175 1423526 logs.go:123] Gathering logs for CRI-O ...
	I0429 12:35:21.217204 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 12:35:21.308085 1423526 logs.go:123] Gathering logs for kubelet ...
	I0429 12:35:21.308164 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 12:35:21.361821 1423526 logs.go:138] Found kubelet problem: Apr 29 12:29:55 old-k8s-version-425197 kubelet[751]: E0429 12:29:55.852397     751 reflector.go:138] object-"kube-system"/"kube-proxy-token-fqfww": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-fqfww" is forbidden: User "system:node:old-k8s-version-425197" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-425197' and this object
	W0429 12:35:21.362158 1423526 logs.go:138] Found kubelet problem: Apr 29 12:29:55 old-k8s-version-425197 kubelet[751]: E0429 12:29:55.852571     751 reflector.go:138] object-"default"/"default-token-vdxct": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-vdxct" is forbidden: User "system:node:old-k8s-version-425197" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-425197' and this object
	W0429 12:35:21.362385 1423526 logs.go:138] Found kubelet problem: Apr 29 12:29:55 old-k8s-version-425197 kubelet[751]: E0429 12:29:55.852623     751 reflector.go:138] object-"kube-system"/"storage-provisioner-token-984nr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-984nr" is forbidden: User "system:node:old-k8s-version-425197" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-425197' and this object
	W0429 12:35:21.362603 1423526 logs.go:138] Found kubelet problem: Apr 29 12:29:55 old-k8s-version-425197 kubelet[751]: E0429 12:29:55.852738     751 reflector.go:138] object-"kube-system"/"metrics-server-token-6n4sr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-6n4sr" is forbidden: User "system:node:old-k8s-version-425197" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-425197' and this object
	W0429 12:35:21.362806 1423526 logs.go:138] Found kubelet problem: Apr 29 12:29:55 old-k8s-version-425197 kubelet[751]: E0429 12:29:55.852797     751 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-425197" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-425197' and this object
	W0429 12:35:21.363015 1423526 logs.go:138] Found kubelet problem: Apr 29 12:29:55 old-k8s-version-425197 kubelet[751]: E0429 12:29:55.852860     751 reflector.go:138] object-"kube-system"/"coredns-token-qz96c": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-qz96c" is forbidden: User "system:node:old-k8s-version-425197" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-425197' and this object
	W0429 12:35:21.363212 1423526 logs.go:138] Found kubelet problem: Apr 29 12:29:55 old-k8s-version-425197 kubelet[751]: E0429 12:29:55.852912     751 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-425197" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-425197' and this object
	W0429 12:35:21.363420 1423526 logs.go:138] Found kubelet problem: Apr 29 12:29:55 old-k8s-version-425197 kubelet[751]: E0429 12:29:55.852954     751 reflector.go:138] object-"kube-system"/"kindnet-token-5j4j2": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-5j4j2" is forbidden: User "system:node:old-k8s-version-425197" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-425197' and this object
	W0429 12:35:21.373645 1423526 logs.go:138] Found kubelet problem: Apr 29 12:29:57 old-k8s-version-425197 kubelet[751]: E0429 12:29:57.145076     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0429 12:35:21.373839 1423526 logs.go:138] Found kubelet problem: Apr 29 12:29:57 old-k8s-version-425197 kubelet[751]: E0429 12:29:57.288428     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.379762 1423526 logs.go:138] Found kubelet problem: Apr 29 12:30:12 old-k8s-version-425197 kubelet[751]: E0429 12:30:12.234703     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0429 12:35:21.381169 1423526 logs.go:138] Found kubelet problem: Apr 29 12:30:18 old-k8s-version-425197 kubelet[751]: E0429 12:30:18.390405     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.381495 1423526 logs.go:138] Found kubelet problem: Apr 29 12:30:19 old-k8s-version-425197 kubelet[751]: E0429 12:30:19.392475     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.381816 1423526 logs.go:138] Found kubelet problem: Apr 29 12:30:22 old-k8s-version-425197 kubelet[751]: E0429 12:30:22.334859     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.381996 1423526 logs.go:138] Found kubelet problem: Apr 29 12:30:26 old-k8s-version-425197 kubelet[751]: E0429 12:30:26.212158     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.382700 1423526 logs.go:138] Found kubelet problem: Apr 29 12:30:36 old-k8s-version-425197 kubelet[751]: E0429 12:30:36.428062     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.384732 1423526 logs.go:138] Found kubelet problem: Apr 29 12:30:38 old-k8s-version-425197 kubelet[751]: E0429 12:30:38.220590     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0429 12:35:21.385083 1423526 logs.go:138] Found kubelet problem: Apr 29 12:30:42 old-k8s-version-425197 kubelet[751]: E0429 12:30:42.334818     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.385267 1423526 logs.go:138] Found kubelet problem: Apr 29 12:30:50 old-k8s-version-425197 kubelet[751]: E0429 12:30:50.212076     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.385850 1423526 logs.go:138] Found kubelet problem: Apr 29 12:30:56 old-k8s-version-425197 kubelet[751]: E0429 12:30:56.459489     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.386172 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:02 old-k8s-version-425197 kubelet[751]: E0429 12:31:02.334286     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.386354 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:04 old-k8s-version-425197 kubelet[751]: E0429 12:31:04.211782     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.386678 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:13 old-k8s-version-425197 kubelet[751]: E0429 12:31:13.212598     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.386859 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:15 old-k8s-version-425197 kubelet[751]: E0429 12:31:15.211861     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.387180 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:24 old-k8s-version-425197 kubelet[751]: E0429 12:31:24.211151     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.389200 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:27 old-k8s-version-425197 kubelet[751]: E0429 12:31:27.221253     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0429 12:35:21.389525 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:36 old-k8s-version-425197 kubelet[751]: E0429 12:31:36.211916     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.389715 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:40 old-k8s-version-425197 kubelet[751]: E0429 12:31:40.211944     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.390300 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:48 old-k8s-version-425197 kubelet[751]: E0429 12:31:48.542770     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.390623 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:52 old-k8s-version-425197 kubelet[751]: E0429 12:31:52.334115     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.390842 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:53 old-k8s-version-425197 kubelet[751]: E0429 12:31:53.212065     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.391174 1423526 logs.go:138] Found kubelet problem: Apr 29 12:32:07 old-k8s-version-425197 kubelet[751]: E0429 12:32:07.212281     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.391357 1423526 logs.go:138] Found kubelet problem: Apr 29 12:32:08 old-k8s-version-425197 kubelet[751]: E0429 12:32:08.212237     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.391690 1423526 logs.go:138] Found kubelet problem: Apr 29 12:32:21 old-k8s-version-425197 kubelet[751]: E0429 12:32:21.211175     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.391872 1423526 logs.go:138] Found kubelet problem: Apr 29 12:32:23 old-k8s-version-425197 kubelet[751]: E0429 12:32:23.212280     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.392195 1423526 logs.go:138] Found kubelet problem: Apr 29 12:32:35 old-k8s-version-425197 kubelet[751]: E0429 12:32:35.211140     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.392376 1423526 logs.go:138] Found kubelet problem: Apr 29 12:32:36 old-k8s-version-425197 kubelet[751]: E0429 12:32:36.212634     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.392709 1423526 logs.go:138] Found kubelet problem: Apr 29 12:32:47 old-k8s-version-425197 kubelet[751]: E0429 12:32:47.211198     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.394727 1423526 logs.go:138] Found kubelet problem: Apr 29 12:32:49 old-k8s-version-425197 kubelet[751]: E0429 12:32:49.219994     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0429 12:35:21.395052 1423526 logs.go:138] Found kubelet problem: Apr 29 12:33:02 old-k8s-version-425197 kubelet[751]: E0429 12:33:02.211522     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.395236 1423526 logs.go:138] Found kubelet problem: Apr 29 12:33:02 old-k8s-version-425197 kubelet[751]: E0429 12:33:02.212607     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.395417 1423526 logs.go:138] Found kubelet problem: Apr 29 12:33:13 old-k8s-version-425197 kubelet[751]: E0429 12:33:13.211645     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.395994 1423526 logs.go:138] Found kubelet problem: Apr 29 12:33:14 old-k8s-version-425197 kubelet[751]: E0429 12:33:14.663840     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.396317 1423526 logs.go:138] Found kubelet problem: Apr 29 12:33:22 old-k8s-version-425197 kubelet[751]: E0429 12:33:22.334161     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.396497 1423526 logs.go:138] Found kubelet problem: Apr 29 12:33:26 old-k8s-version-425197 kubelet[751]: E0429 12:33:26.211803     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.396827 1423526 logs.go:138] Found kubelet problem: Apr 29 12:33:36 old-k8s-version-425197 kubelet[751]: E0429 12:33:36.211577     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.397012 1423526 logs.go:138] Found kubelet problem: Apr 29 12:33:39 old-k8s-version-425197 kubelet[751]: E0429 12:33:39.211633     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.397335 1423526 logs.go:138] Found kubelet problem: Apr 29 12:33:49 old-k8s-version-425197 kubelet[751]: E0429 12:33:49.211245     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.397519 1423526 logs.go:138] Found kubelet problem: Apr 29 12:33:51 old-k8s-version-425197 kubelet[751]: E0429 12:33:51.211757     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.397843 1423526 logs.go:138] Found kubelet problem: Apr 29 12:34:01 old-k8s-version-425197 kubelet[751]: E0429 12:34:01.211169     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.398026 1423526 logs.go:138] Found kubelet problem: Apr 29 12:34:03 old-k8s-version-425197 kubelet[751]: E0429 12:34:03.211740     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.398348 1423526 logs.go:138] Found kubelet problem: Apr 29 12:34:13 old-k8s-version-425197 kubelet[751]: E0429 12:34:13.211192     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.398531 1423526 logs.go:138] Found kubelet problem: Apr 29 12:34:14 old-k8s-version-425197 kubelet[751]: E0429 12:34:14.211820     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.398856 1423526 logs.go:138] Found kubelet problem: Apr 29 12:34:27 old-k8s-version-425197 kubelet[751]: E0429 12:34:27.211244     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.399041 1423526 logs.go:138] Found kubelet problem: Apr 29 12:34:27 old-k8s-version-425197 kubelet[751]: E0429 12:34:27.212498     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.399366 1423526 logs.go:138] Found kubelet problem: Apr 29 12:34:38 old-k8s-version-425197 kubelet[751]: E0429 12:34:38.211089     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.399548 1423526 logs.go:138] Found kubelet problem: Apr 29 12:34:40 old-k8s-version-425197 kubelet[751]: E0429 12:34:40.211661     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.400111 1423526 logs.go:138] Found kubelet problem: Apr 29 12:34:52 old-k8s-version-425197 kubelet[751]: E0429 12:34:52.211238     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.400292 1423526 logs.go:138] Found kubelet problem: Apr 29 12:34:55 old-k8s-version-425197 kubelet[751]: E0429 12:34:55.211712     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.400614 1423526 logs.go:138] Found kubelet problem: Apr 29 12:35:03 old-k8s-version-425197 kubelet[751]: E0429 12:35:03.211147     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:21.400801 1423526 logs.go:138] Found kubelet problem: Apr 29 12:35:07 old-k8s-version-425197 kubelet[751]: E0429 12:35:07.211697     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:21.401122 1423526 logs.go:138] Found kubelet problem: Apr 29 12:35:14 old-k8s-version-425197 kubelet[751]: E0429 12:35:14.212381     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	I0429 12:35:21.401133 1423526 logs.go:123] Gathering logs for coredns [e7e73acd6e8014a42c001b36ed0f2d3c63e892feb1fc378fe2fdd2de335d53dc] ...
	I0429 12:35:21.401146 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7e73acd6e8014a42c001b36ed0f2d3c63e892feb1fc378fe2fdd2de335d53dc"
	I0429 12:35:21.443044 1423526 logs.go:123] Gathering logs for kube-apiserver [f70d0c48d399667aeefaece1ef554897ebeb68eb674e2b89d699347639d7798f] ...
	I0429 12:35:21.443073 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f70d0c48d399667aeefaece1ef554897ebeb68eb674e2b89d699347639d7798f"
	I0429 12:35:21.532041 1423526 logs.go:123] Gathering logs for etcd [f87e16673fad4748fc3431ca613237d1244ca35397a18f8cc52a6f1a55706439] ...
	I0429 12:35:21.532080 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f87e16673fad4748fc3431ca613237d1244ca35397a18f8cc52a6f1a55706439"
	I0429 12:35:21.579347 1423526 logs.go:123] Gathering logs for kube-proxy [18f81edeecda84f01c57c2228b6bbacaea535fd1209a9525e4f4eb7e1ccec852] ...
	I0429 12:35:21.579375 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18f81edeecda84f01c57c2228b6bbacaea535fd1209a9525e4f4eb7e1ccec852"
	I0429 12:35:21.623012 1423526 logs.go:123] Gathering logs for kube-controller-manager [f54a7f3a396ba3db9133a1183e07c7fc6d06a8419cfc3ff1a57aa605a963a445] ...
	I0429 12:35:21.623039 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f54a7f3a396ba3db9133a1183e07c7fc6d06a8419cfc3ff1a57aa605a963a445"
	I0429 12:35:21.703667 1423526 logs.go:123] Gathering logs for kubernetes-dashboard [8e449405f978982d5dbc2fab079a55aed290a3bb6c162ea759272192a3d96fa9] ...
	I0429 12:35:21.703701 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e449405f978982d5dbc2fab079a55aed290a3bb6c162ea759272192a3d96fa9"
	I0429 12:35:21.746308 1423526 logs.go:123] Gathering logs for dmesg ...
	I0429 12:35:21.746336 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 12:35:21.766264 1423526 logs.go:123] Gathering logs for describe nodes ...
	I0429 12:35:21.766402 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 12:35:21.929075 1423526 logs.go:123] Gathering logs for container status ...
	I0429 12:35:21.929103 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 12:35:21.978352 1423526 logs.go:123] Gathering logs for kube-scheduler [2d3031452f789fb6a9b7327ac93e3217d5bdc6b6383542150d20d2538540820b] ...
	I0429 12:35:21.978381 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d3031452f789fb6a9b7327ac93e3217d5bdc6b6383542150d20d2538540820b"
	I0429 12:35:22.027997 1423526 logs.go:123] Gathering logs for storage-provisioner [a8ebc8ecfe2f93123a8fd7e1b220084645143d83b644d4ac445a896b7a30c0cb] ...
	I0429 12:35:22.028025 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8ebc8ecfe2f93123a8fd7e1b220084645143d83b644d4ac445a896b7a30c0cb"
	I0429 12:35:22.070591 1423526 out.go:304] Setting ErrFile to fd 2...
	I0429 12:35:22.070618 1423526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 12:35:22.070663 1423526 out.go:239] X Problems detected in kubelet:
	W0429 12:35:22.070676 1423526 out.go:239]   Apr 29 12:34:52 old-k8s-version-425197 kubelet[751]: E0429 12:34:52.211238     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:22.070683 1423526 out.go:239]   Apr 29 12:34:55 old-k8s-version-425197 kubelet[751]: E0429 12:34:55.211712     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:22.070691 1423526 out.go:239]   Apr 29 12:35:03 old-k8s-version-425197 kubelet[751]: E0429 12:35:03.211147     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:22.070716 1423526 out.go:239]   Apr 29 12:35:07 old-k8s-version-425197 kubelet[751]: E0429 12:35:07.211697     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:22.070725 1423526 out.go:239]   Apr 29 12:35:14 old-k8s-version-425197 kubelet[751]: E0429 12:35:14.212381     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	I0429 12:35:22.070731 1423526 out.go:304] Setting ErrFile to fd 2...
	I0429 12:35:22.070739 1423526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:35:32.071134 1423526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:35:32.083403 1423526 api_server.go:72] duration metric: took 5m53.762829916s to wait for apiserver process to appear ...
	I0429 12:35:32.083426 1423526 api_server.go:88] waiting for apiserver healthz status ...
	I0429 12:35:32.083462 1423526 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 12:35:32.083517 1423526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 12:35:32.128629 1423526 cri.go:89] found id: "f70d0c48d399667aeefaece1ef554897ebeb68eb674e2b89d699347639d7798f"
	I0429 12:35:32.128649 1423526 cri.go:89] found id: ""
	I0429 12:35:32.128656 1423526 logs.go:276] 1 containers: [f70d0c48d399667aeefaece1ef554897ebeb68eb674e2b89d699347639d7798f]
	I0429 12:35:32.128738 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:35:32.132416 1423526 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 12:35:32.132489 1423526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 12:35:32.173538 1423526 cri.go:89] found id: "f87e16673fad4748fc3431ca613237d1244ca35397a18f8cc52a6f1a55706439"
	I0429 12:35:32.173558 1423526 cri.go:89] found id: ""
	I0429 12:35:32.173567 1423526 logs.go:276] 1 containers: [f87e16673fad4748fc3431ca613237d1244ca35397a18f8cc52a6f1a55706439]
	I0429 12:35:32.173623 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:35:32.177288 1423526 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 12:35:32.177361 1423526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 12:35:32.220046 1423526 cri.go:89] found id: "e7e73acd6e8014a42c001b36ed0f2d3c63e892feb1fc378fe2fdd2de335d53dc"
	I0429 12:35:32.220070 1423526 cri.go:89] found id: ""
	I0429 12:35:32.220078 1423526 logs.go:276] 1 containers: [e7e73acd6e8014a42c001b36ed0f2d3c63e892feb1fc378fe2fdd2de335d53dc]
	I0429 12:35:32.220135 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:35:32.224358 1423526 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 12:35:32.224428 1423526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 12:35:32.274913 1423526 cri.go:89] found id: "2d3031452f789fb6a9b7327ac93e3217d5bdc6b6383542150d20d2538540820b"
	I0429 12:35:32.274934 1423526 cri.go:89] found id: ""
	I0429 12:35:32.274942 1423526 logs.go:276] 1 containers: [2d3031452f789fb6a9b7327ac93e3217d5bdc6b6383542150d20d2538540820b]
	I0429 12:35:32.274999 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:35:32.278544 1423526 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 12:35:32.278624 1423526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 12:35:32.318854 1423526 cri.go:89] found id: "18f81edeecda84f01c57c2228b6bbacaea535fd1209a9525e4f4eb7e1ccec852"
	I0429 12:35:32.318878 1423526 cri.go:89] found id: ""
	I0429 12:35:32.318886 1423526 logs.go:276] 1 containers: [18f81edeecda84f01c57c2228b6bbacaea535fd1209a9525e4f4eb7e1ccec852]
	I0429 12:35:32.318961 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:35:32.322884 1423526 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 12:35:32.322980 1423526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 12:35:32.364795 1423526 cri.go:89] found id: "f54a7f3a396ba3db9133a1183e07c7fc6d06a8419cfc3ff1a57aa605a963a445"
	I0429 12:35:32.364815 1423526 cri.go:89] found id: ""
	I0429 12:35:32.364823 1423526 logs.go:276] 1 containers: [f54a7f3a396ba3db9133a1183e07c7fc6d06a8419cfc3ff1a57aa605a963a445]
	I0429 12:35:32.364884 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:35:32.368723 1423526 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 12:35:32.368793 1423526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 12:35:32.408380 1423526 cri.go:89] found id: "b187065048c48a70de847611a7bfa1e7d604a1f06e81f41522e2c4623edd5594"
	I0429 12:35:32.408408 1423526 cri.go:89] found id: ""
	I0429 12:35:32.408416 1423526 logs.go:276] 1 containers: [b187065048c48a70de847611a7bfa1e7d604a1f06e81f41522e2c4623edd5594]
	I0429 12:35:32.408470 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:35:32.412015 1423526 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 12:35:32.412087 1423526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 12:35:32.455685 1423526 cri.go:89] found id: "8e449405f978982d5dbc2fab079a55aed290a3bb6c162ea759272192a3d96fa9"
	I0429 12:35:32.455751 1423526 cri.go:89] found id: ""
	I0429 12:35:32.455782 1423526 logs.go:276] 1 containers: [8e449405f978982d5dbc2fab079a55aed290a3bb6c162ea759272192a3d96fa9]
	I0429 12:35:32.455858 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:35:32.459503 1423526 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 12:35:32.459579 1423526 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 12:35:32.513099 1423526 cri.go:89] found id: "3ab604aa41320d5c9477a97ac38a8892497dd2dbef11c5750c4c98c0fcff96bd"
	I0429 12:35:32.513120 1423526 cri.go:89] found id: "a8ebc8ecfe2f93123a8fd7e1b220084645143d83b644d4ac445a896b7a30c0cb"
	I0429 12:35:32.513125 1423526 cri.go:89] found id: ""
	I0429 12:35:32.513133 1423526 logs.go:276] 2 containers: [3ab604aa41320d5c9477a97ac38a8892497dd2dbef11c5750c4c98c0fcff96bd a8ebc8ecfe2f93123a8fd7e1b220084645143d83b644d4ac445a896b7a30c0cb]
	I0429 12:35:32.513188 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:35:32.516756 1423526 ssh_runner.go:195] Run: which crictl
	I0429 12:35:32.521907 1423526 logs.go:123] Gathering logs for kube-apiserver [f70d0c48d399667aeefaece1ef554897ebeb68eb674e2b89d699347639d7798f] ...
	I0429 12:35:32.521946 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f70d0c48d399667aeefaece1ef554897ebeb68eb674e2b89d699347639d7798f"
	I0429 12:35:32.601911 1423526 logs.go:123] Gathering logs for kube-proxy [18f81edeecda84f01c57c2228b6bbacaea535fd1209a9525e4f4eb7e1ccec852] ...
	I0429 12:35:32.601947 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18f81edeecda84f01c57c2228b6bbacaea535fd1209a9525e4f4eb7e1ccec852"
	I0429 12:35:32.641448 1423526 logs.go:123] Gathering logs for kubernetes-dashboard [8e449405f978982d5dbc2fab079a55aed290a3bb6c162ea759272192a3d96fa9] ...
	I0429 12:35:32.641478 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e449405f978982d5dbc2fab079a55aed290a3bb6c162ea759272192a3d96fa9"
	I0429 12:35:32.687584 1423526 logs.go:123] Gathering logs for storage-provisioner [3ab604aa41320d5c9477a97ac38a8892497dd2dbef11c5750c4c98c0fcff96bd] ...
	I0429 12:35:32.687612 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ab604aa41320d5c9477a97ac38a8892497dd2dbef11c5750c4c98c0fcff96bd"
	I0429 12:35:32.736435 1423526 logs.go:123] Gathering logs for kube-scheduler [2d3031452f789fb6a9b7327ac93e3217d5bdc6b6383542150d20d2538540820b] ...
	I0429 12:35:32.736462 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d3031452f789fb6a9b7327ac93e3217d5bdc6b6383542150d20d2538540820b"
	I0429 12:35:32.780084 1423526 logs.go:123] Gathering logs for kubelet ...
	I0429 12:35:32.780117 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 12:35:32.833917 1423526 logs.go:138] Found kubelet problem: Apr 29 12:29:55 old-k8s-version-425197 kubelet[751]: E0429 12:29:55.852397     751 reflector.go:138] object-"kube-system"/"kube-proxy-token-fqfww": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-fqfww" is forbidden: User "system:node:old-k8s-version-425197" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-425197' and this object
	W0429 12:35:32.834240 1423526 logs.go:138] Found kubelet problem: Apr 29 12:29:55 old-k8s-version-425197 kubelet[751]: E0429 12:29:55.852571     751 reflector.go:138] object-"default"/"default-token-vdxct": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-vdxct" is forbidden: User "system:node:old-k8s-version-425197" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-425197' and this object
	W0429 12:35:32.834472 1423526 logs.go:138] Found kubelet problem: Apr 29 12:29:55 old-k8s-version-425197 kubelet[751]: E0429 12:29:55.852623     751 reflector.go:138] object-"kube-system"/"storage-provisioner-token-984nr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-984nr" is forbidden: User "system:node:old-k8s-version-425197" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-425197' and this object
	W0429 12:35:32.834708 1423526 logs.go:138] Found kubelet problem: Apr 29 12:29:55 old-k8s-version-425197 kubelet[751]: E0429 12:29:55.852738     751 reflector.go:138] object-"kube-system"/"metrics-server-token-6n4sr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-6n4sr" is forbidden: User "system:node:old-k8s-version-425197" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-425197' and this object
	W0429 12:35:32.834931 1423526 logs.go:138] Found kubelet problem: Apr 29 12:29:55 old-k8s-version-425197 kubelet[751]: E0429 12:29:55.852797     751 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-425197" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-425197' and this object
	W0429 12:35:32.835145 1423526 logs.go:138] Found kubelet problem: Apr 29 12:29:55 old-k8s-version-425197 kubelet[751]: E0429 12:29:55.852860     751 reflector.go:138] object-"kube-system"/"coredns-token-qz96c": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-qz96c" is forbidden: User "system:node:old-k8s-version-425197" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-425197' and this object
	W0429 12:35:32.835347 1423526 logs.go:138] Found kubelet problem: Apr 29 12:29:55 old-k8s-version-425197 kubelet[751]: E0429 12:29:55.852912     751 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-425197" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-425197' and this object
	W0429 12:35:32.835555 1423526 logs.go:138] Found kubelet problem: Apr 29 12:29:55 old-k8s-version-425197 kubelet[751]: E0429 12:29:55.852954     751 reflector.go:138] object-"kube-system"/"kindnet-token-5j4j2": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-5j4j2" is forbidden: User "system:node:old-k8s-version-425197" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-425197' and this object
	W0429 12:35:32.846867 1423526 logs.go:138] Found kubelet problem: Apr 29 12:29:57 old-k8s-version-425197 kubelet[751]: E0429 12:29:57.145076     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0429 12:35:32.847120 1423526 logs.go:138] Found kubelet problem: Apr 29 12:29:57 old-k8s-version-425197 kubelet[751]: E0429 12:29:57.288428     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.853090 1423526 logs.go:138] Found kubelet problem: Apr 29 12:30:12 old-k8s-version-425197 kubelet[751]: E0429 12:30:12.234703     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0429 12:35:32.854476 1423526 logs.go:138] Found kubelet problem: Apr 29 12:30:18 old-k8s-version-425197 kubelet[751]: E0429 12:30:18.390405     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.854803 1423526 logs.go:138] Found kubelet problem: Apr 29 12:30:19 old-k8s-version-425197 kubelet[751]: E0429 12:30:19.392475     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.855127 1423526 logs.go:138] Found kubelet problem: Apr 29 12:30:22 old-k8s-version-425197 kubelet[751]: E0429 12:30:22.334859     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.855311 1423526 logs.go:138] Found kubelet problem: Apr 29 12:30:26 old-k8s-version-425197 kubelet[751]: E0429 12:30:26.212158     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.856017 1423526 logs.go:138] Found kubelet problem: Apr 29 12:30:36 old-k8s-version-425197 kubelet[751]: E0429 12:30:36.428062     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.858043 1423526 logs.go:138] Found kubelet problem: Apr 29 12:30:38 old-k8s-version-425197 kubelet[751]: E0429 12:30:38.220590     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0429 12:35:32.858368 1423526 logs.go:138] Found kubelet problem: Apr 29 12:30:42 old-k8s-version-425197 kubelet[751]: E0429 12:30:42.334818     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.858548 1423526 logs.go:138] Found kubelet problem: Apr 29 12:30:50 old-k8s-version-425197 kubelet[751]: E0429 12:30:50.212076     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.859126 1423526 logs.go:138] Found kubelet problem: Apr 29 12:30:56 old-k8s-version-425197 kubelet[751]: E0429 12:30:56.459489     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.859449 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:02 old-k8s-version-425197 kubelet[751]: E0429 12:31:02.334286     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.859630 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:04 old-k8s-version-425197 kubelet[751]: E0429 12:31:04.211782     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.859953 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:13 old-k8s-version-425197 kubelet[751]: E0429 12:31:13.212598     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.860134 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:15 old-k8s-version-425197 kubelet[751]: E0429 12:31:15.211861     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.860457 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:24 old-k8s-version-425197 kubelet[751]: E0429 12:31:24.211151     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.862510 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:27 old-k8s-version-425197 kubelet[751]: E0429 12:31:27.221253     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0429 12:35:32.862837 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:36 old-k8s-version-425197 kubelet[751]: E0429 12:31:36.211916     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.863018 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:40 old-k8s-version-425197 kubelet[751]: E0429 12:31:40.211944     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.863598 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:48 old-k8s-version-425197 kubelet[751]: E0429 12:31:48.542770     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.863921 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:52 old-k8s-version-425197 kubelet[751]: E0429 12:31:52.334115     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.864102 1423526 logs.go:138] Found kubelet problem: Apr 29 12:31:53 old-k8s-version-425197 kubelet[751]: E0429 12:31:53.212065     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.864429 1423526 logs.go:138] Found kubelet problem: Apr 29 12:32:07 old-k8s-version-425197 kubelet[751]: E0429 12:32:07.212281     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.864613 1423526 logs.go:138] Found kubelet problem: Apr 29 12:32:08 old-k8s-version-425197 kubelet[751]: E0429 12:32:08.212237     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.864941 1423526 logs.go:138] Found kubelet problem: Apr 29 12:32:21 old-k8s-version-425197 kubelet[751]: E0429 12:32:21.211175     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.865123 1423526 logs.go:138] Found kubelet problem: Apr 29 12:32:23 old-k8s-version-425197 kubelet[751]: E0429 12:32:23.212280     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.865446 1423526 logs.go:138] Found kubelet problem: Apr 29 12:32:35 old-k8s-version-425197 kubelet[751]: E0429 12:32:35.211140     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.865627 1423526 logs.go:138] Found kubelet problem: Apr 29 12:32:36 old-k8s-version-425197 kubelet[751]: E0429 12:32:36.212634     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.865948 1423526 logs.go:138] Found kubelet problem: Apr 29 12:32:47 old-k8s-version-425197 kubelet[751]: E0429 12:32:47.211198     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.867960 1423526 logs.go:138] Found kubelet problem: Apr 29 12:32:49 old-k8s-version-425197 kubelet[751]: E0429 12:32:49.219994     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0429 12:35:32.868282 1423526 logs.go:138] Found kubelet problem: Apr 29 12:33:02 old-k8s-version-425197 kubelet[751]: E0429 12:33:02.211522     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.868462 1423526 logs.go:138] Found kubelet problem: Apr 29 12:33:02 old-k8s-version-425197 kubelet[751]: E0429 12:33:02.212607     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.868645 1423526 logs.go:138] Found kubelet problem: Apr 29 12:33:13 old-k8s-version-425197 kubelet[751]: E0429 12:33:13.211645     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.869237 1423526 logs.go:138] Found kubelet problem: Apr 29 12:33:14 old-k8s-version-425197 kubelet[751]: E0429 12:33:14.663840     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.869562 1423526 logs.go:138] Found kubelet problem: Apr 29 12:33:22 old-k8s-version-425197 kubelet[751]: E0429 12:33:22.334161     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.869745 1423526 logs.go:138] Found kubelet problem: Apr 29 12:33:26 old-k8s-version-425197 kubelet[751]: E0429 12:33:26.211803     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.870066 1423526 logs.go:138] Found kubelet problem: Apr 29 12:33:36 old-k8s-version-425197 kubelet[751]: E0429 12:33:36.211577     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.870251 1423526 logs.go:138] Found kubelet problem: Apr 29 12:33:39 old-k8s-version-425197 kubelet[751]: E0429 12:33:39.211633     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.870576 1423526 logs.go:138] Found kubelet problem: Apr 29 12:33:49 old-k8s-version-425197 kubelet[751]: E0429 12:33:49.211245     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.870756 1423526 logs.go:138] Found kubelet problem: Apr 29 12:33:51 old-k8s-version-425197 kubelet[751]: E0429 12:33:51.211757     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.871083 1423526 logs.go:138] Found kubelet problem: Apr 29 12:34:01 old-k8s-version-425197 kubelet[751]: E0429 12:34:01.211169     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.871263 1423526 logs.go:138] Found kubelet problem: Apr 29 12:34:03 old-k8s-version-425197 kubelet[751]: E0429 12:34:03.211740     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.871584 1423526 logs.go:138] Found kubelet problem: Apr 29 12:34:13 old-k8s-version-425197 kubelet[751]: E0429 12:34:13.211192     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.871764 1423526 logs.go:138] Found kubelet problem: Apr 29 12:34:14 old-k8s-version-425197 kubelet[751]: E0429 12:34:14.211820     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.872085 1423526 logs.go:138] Found kubelet problem: Apr 29 12:34:27 old-k8s-version-425197 kubelet[751]: E0429 12:34:27.211244     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.872265 1423526 logs.go:138] Found kubelet problem: Apr 29 12:34:27 old-k8s-version-425197 kubelet[751]: E0429 12:34:27.212498     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.872590 1423526 logs.go:138] Found kubelet problem: Apr 29 12:34:38 old-k8s-version-425197 kubelet[751]: E0429 12:34:38.211089     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.872777 1423526 logs.go:138] Found kubelet problem: Apr 29 12:34:40 old-k8s-version-425197 kubelet[751]: E0429 12:34:40.211661     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.873346 1423526 logs.go:138] Found kubelet problem: Apr 29 12:34:52 old-k8s-version-425197 kubelet[751]: E0429 12:34:52.211238     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.873527 1423526 logs.go:138] Found kubelet problem: Apr 29 12:34:55 old-k8s-version-425197 kubelet[751]: E0429 12:34:55.211712     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.873849 1423526 logs.go:138] Found kubelet problem: Apr 29 12:35:03 old-k8s-version-425197 kubelet[751]: E0429 12:35:03.211147     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.874030 1423526 logs.go:138] Found kubelet problem: Apr 29 12:35:07 old-k8s-version-425197 kubelet[751]: E0429 12:35:07.211697     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.874351 1423526 logs.go:138] Found kubelet problem: Apr 29 12:35:14 old-k8s-version-425197 kubelet[751]: E0429 12:35:14.212381     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:32.874534 1423526 logs.go:138] Found kubelet problem: Apr 29 12:35:22 old-k8s-version-425197 kubelet[751]: E0429 12:35:22.212000     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:32.874857 1423526 logs.go:138] Found kubelet problem: Apr 29 12:35:27 old-k8s-version-425197 kubelet[751]: E0429 12:35:27.211183     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	I0429 12:35:32.874869 1423526 logs.go:123] Gathering logs for describe nodes ...
	I0429 12:35:32.874882 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 12:35:33.044639 1423526 logs.go:123] Gathering logs for etcd [f87e16673fad4748fc3431ca613237d1244ca35397a18f8cc52a6f1a55706439] ...
	I0429 12:35:33.044940 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f87e16673fad4748fc3431ca613237d1244ca35397a18f8cc52a6f1a55706439"
	I0429 12:35:33.091438 1423526 logs.go:123] Gathering logs for coredns [e7e73acd6e8014a42c001b36ed0f2d3c63e892feb1fc378fe2fdd2de335d53dc] ...
	I0429 12:35:33.091470 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7e73acd6e8014a42c001b36ed0f2d3c63e892feb1fc378fe2fdd2de335d53dc"
	I0429 12:35:33.139453 1423526 logs.go:123] Gathering logs for kube-controller-manager [f54a7f3a396ba3db9133a1183e07c7fc6d06a8419cfc3ff1a57aa605a963a445] ...
	I0429 12:35:33.139479 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f54a7f3a396ba3db9133a1183e07c7fc6d06a8419cfc3ff1a57aa605a963a445"
	I0429 12:35:33.232982 1423526 logs.go:123] Gathering logs for storage-provisioner [a8ebc8ecfe2f93123a8fd7e1b220084645143d83b644d4ac445a896b7a30c0cb] ...
	I0429 12:35:33.233019 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8ebc8ecfe2f93123a8fd7e1b220084645143d83b644d4ac445a896b7a30c0cb"
	I0429 12:35:33.280974 1423526 logs.go:123] Gathering logs for dmesg ...
	I0429 12:35:33.281007 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 12:35:33.300285 1423526 logs.go:123] Gathering logs for kindnet [b187065048c48a70de847611a7bfa1e7d604a1f06e81f41522e2c4623edd5594] ...
	I0429 12:35:33.300316 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b187065048c48a70de847611a7bfa1e7d604a1f06e81f41522e2c4623edd5594"
	I0429 12:35:33.358564 1423526 logs.go:123] Gathering logs for CRI-O ...
	I0429 12:35:33.358595 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 12:35:33.442820 1423526 logs.go:123] Gathering logs for container status ...
	I0429 12:35:33.442856 1423526 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 12:35:33.500458 1423526 out.go:304] Setting ErrFile to fd 2...
	I0429 12:35:33.500484 1423526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 12:35:33.500554 1423526 out.go:239] X Problems detected in kubelet:
	W0429 12:35:33.500569 1423526 out.go:239]   Apr 29 12:35:03 old-k8s-version-425197 kubelet[751]: E0429 12:35:03.211147     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:33.500578 1423526 out.go:239]   Apr 29 12:35:07 old-k8s-version-425197 kubelet[751]: E0429 12:35:07.211697     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:33.500593 1423526 out.go:239]   Apr 29 12:35:14 old-k8s-version-425197 kubelet[751]: E0429 12:35:14.212381     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	W0429 12:35:33.500724 1423526 out.go:239]   Apr 29 12:35:22 old-k8s-version-425197 kubelet[751]: E0429 12:35:22.212000     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0429 12:35:33.500755 1423526 out.go:239]   Apr 29 12:35:27 old-k8s-version-425197 kubelet[751]: E0429 12:35:27.211183     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	  Apr 29 12:35:27 old-k8s-version-425197 kubelet[751]: E0429 12:35:27.211183     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	I0429 12:35:33.500768 1423526 out.go:304] Setting ErrFile to fd 2...
	I0429 12:35:33.500775 1423526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:35:43.500976 1423526 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0429 12:35:43.509509 1423526 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0429 12:35:43.513661 1423526 out.go:177] 
	W0429 12:35:43.515996 1423526 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0429 12:35:43.516194 1423526 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0429 12:35:43.516256 1423526 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0429 12:35:43.516290 1423526 out.go:239] * 
	* 
	W0429 12:35:43.517800 1423526 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 12:35:43.520235 1423526 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-425197 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 102
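The stderr above ends in `K8S_UNHEALTHY_CONTROL_PLANE`, but the recurring kubelet `pod_workers.go:191] Error syncing pod` lines (ImagePullBackOff on the intentionally unreachable `fake.domain` metrics-server image, CrashLoopBackOff on dashboard-metrics-scraper) are the useful triage signal. A minimal sketch of pulling the failing container and back-off reason out of such a line — the helper name `triage` and the regex are illustrative assumptions, not part of minikube or kubelet tooling:

```python
import re

# Sample line abridged from the kubelet output in the stderr dump above.
LINE = (
    'E0429 12:35:22.212000     751 pod_workers.go:191] Error syncing pod '
    'a940d36c-f69c-494d-a918-07d04aa421aa, skipping: failed to '
    '"StartContainer" for "metrics-server" with ImagePullBackOff: '
    '"Back-off pulling image \\"fake.domain/registry.k8s.io/echoserver:1.4\\""'
)

# Matches the kubelet's 'failed to "StartContainer" for "<name>" with <Reason>' clause.
PATTERN = re.compile(
    r'failed to "StartContainer" for "(?P<container>[^"]+)" with (?P<reason>\w+)'
)

def triage(line: str):
    """Return (container, back-off reason) from a pod_workers error line, or None."""
    m = PATTERN.search(line)
    return (m.group("container"), m.group("reason")) if m else None

print(triage(LINE))  # ('metrics-server', 'ImagePullBackOff')
```

Both back-offs here are expected in this test (the metrics-server registry is deliberately faked); the actual failure is the control-plane version check that follows them.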
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-425197
helpers_test.go:235: (dbg) docker inspect old-k8s-version-425197:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9ec77e830d08baef23626f31b5ae4f87186a167e012dfa15a04ef645805d0496",
	        "Created": "2024-04-29T12:26:24.448193532Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1423716,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-29T12:29:30.697374372Z",
	            "FinishedAt": "2024-04-29T12:29:26.984017331Z"
	        },
	        "Image": "sha256:c9315e0f61546d7b9630cf89252fa7f614fc966830e816cca5333df5c944376f",
	        "ResolvConfPath": "/var/lib/docker/containers/9ec77e830d08baef23626f31b5ae4f87186a167e012dfa15a04ef645805d0496/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9ec77e830d08baef23626f31b5ae4f87186a167e012dfa15a04ef645805d0496/hostname",
	        "HostsPath": "/var/lib/docker/containers/9ec77e830d08baef23626f31b5ae4f87186a167e012dfa15a04ef645805d0496/hosts",
	        "LogPath": "/var/lib/docker/containers/9ec77e830d08baef23626f31b5ae4f87186a167e012dfa15a04ef645805d0496/9ec77e830d08baef23626f31b5ae4f87186a167e012dfa15a04ef645805d0496-json.log",
	        "Name": "/old-k8s-version-425197",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-425197:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-425197",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ee73569bdac28ea7031b4edad4e5be5e0cf879ea69ad0c85522fcc8791e1dc9a-init/diff:/var/lib/docker/overlay2/99267fe96688a6fee0a92469b55a9da51d73214dc11fc371bf5149dbc069c731/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ee73569bdac28ea7031b4edad4e5be5e0cf879ea69ad0c85522fcc8791e1dc9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ee73569bdac28ea7031b4edad4e5be5e0cf879ea69ad0c85522fcc8791e1dc9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ee73569bdac28ea7031b4edad4e5be5e0cf879ea69ad0c85522fcc8791e1dc9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-425197",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-425197/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-425197",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-425197",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-425197",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f44b1fac4b9475ac1f84480bb62064d6fe1f3fde5659564d50a834fc42cbb766",
	            "SandboxKey": "/var/run/docker/netns/f44b1fac4b94",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34568"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34567"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34564"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34566"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34565"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-425197": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "e418dfcfe5850004b1f96ecc2546d5e471265b8fb16f3850524fc3d1f67e1a4f",
	                    "EndpointID": "972df37417458f117ef22f3cda8218ab8d6762c44f3fc83e6246aeeadef4258a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-425197",
	                        "9ec77e830d08"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
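The inspect dump above shows the profile container running with all five container ports published on 127.0.0.1 ephemeral host ports. A minimal sketch of extracting that port map from `docker inspect` JSON — the sample payload is abridged from the dump above, and the helper name `host_ports` is an assumption for illustration:

```python
import json

# Abridged from the `docker inspect old-k8s-version-425197` output above.
inspect_output = json.loads("""
[
  {
    "Name": "/old-k8s-version-425197",
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "34568"}],
        "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "34565"}]
      }
    }
  }
]
""")

def host_ports(inspect_data):
    """Map container port -> published host port for the first inspected container."""
    ports = inspect_data[0]["NetworkSettings"]["Ports"]
    return {cport: binds[0]["HostPort"] for cport, binds in ports.items() if binds}

print(host_ports(inspect_output))  # {'22/tcp': '34568', '8443/tcp': '34565'}
```

The same values can be read directly with `docker port old-k8s-version-425197`; port 8443 (here 34565) is the apiserver endpoint the harness polls at `https://192.168.85.2:8443/healthz`.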
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-425197 -n old-k8s-version-425197
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-425197 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-425197 logs -n 25: (2.61788016s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-475015                           | kubernetes-upgrade-475015 | jenkins | v1.33.0 | 29 Apr 24 12:25 UTC | 29 Apr 24 12:25 UTC |
	| start   | -p cert-expiration-059027                              | cert-expiration-059027    | jenkins | v1.33.0 | 29 Apr 24 12:25 UTC | 29 Apr 24 12:25 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-841567                            | force-systemd-env-841567  | jenkins | v1.33.0 | 29 Apr 24 12:25 UTC | 29 Apr 24 12:25 UTC |
	| start   | -p cert-options-002271                                 | cert-options-002271       | jenkins | v1.33.0 | 29 Apr 24 12:25 UTC | 29 Apr 24 12:26 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| ssh     | cert-options-002271 ssh                                | cert-options-002271       | jenkins | v1.33.0 | 29 Apr 24 12:26 UTC | 29 Apr 24 12:26 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-002271 -- sudo                         | cert-options-002271       | jenkins | v1.33.0 | 29 Apr 24 12:26 UTC | 29 Apr 24 12:26 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-002271                                 | cert-options-002271       | jenkins | v1.33.0 | 29 Apr 24 12:26 UTC | 29 Apr 24 12:26 UTC |
	| start   | -p old-k8s-version-425197                              | old-k8s-version-425197    | jenkins | v1.33.0 | 29 Apr 24 12:26 UTC | 29 Apr 24 12:29 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-059027                              | cert-expiration-059027    | jenkins | v1.33.0 | 29 Apr 24 12:28 UTC | 29 Apr 24 12:29 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-059027                              | cert-expiration-059027    | jenkins | v1.33.0 | 29 Apr 24 12:29 UTC | 29 Apr 24 12:29 UTC |
	| start   | -p no-preload-880190                                   | no-preload-880190         | jenkins | v1.33.0 | 29 Apr 24 12:29 UTC | 29 Apr 24 12:30 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-425197        | old-k8s-version-425197    | jenkins | v1.33.0 | 29 Apr 24 12:29 UTC | 29 Apr 24 12:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-425197                              | old-k8s-version-425197    | jenkins | v1.33.0 | 29 Apr 24 12:29 UTC | 29 Apr 24 12:29 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-425197             | old-k8s-version-425197    | jenkins | v1.33.0 | 29 Apr 24 12:29 UTC | 29 Apr 24 12:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-425197                              | old-k8s-version-425197    | jenkins | v1.33.0 | 29 Apr 24 12:29 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-880190             | no-preload-880190         | jenkins | v1.33.0 | 29 Apr 24 12:30 UTC | 29 Apr 24 12:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-880190                                   | no-preload-880190         | jenkins | v1.33.0 | 29 Apr 24 12:30 UTC | 29 Apr 24 12:30 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-880190                  | no-preload-880190         | jenkins | v1.33.0 | 29 Apr 24 12:30 UTC | 29 Apr 24 12:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-880190                                   | no-preload-880190         | jenkins | v1.33.0 | 29 Apr 24 12:30 UTC | 29 Apr 24 12:35 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                           |         |         |                     |                     |
	| image   | no-preload-880190 image list                           | no-preload-880190         | jenkins | v1.33.0 | 29 Apr 24 12:35 UTC | 29 Apr 24 12:35 UTC |
	|         | --format=json                                          |                           |         |         |                     |                     |
	| pause   | -p no-preload-880190                                   | no-preload-880190         | jenkins | v1.33.0 | 29 Apr 24 12:35 UTC | 29 Apr 24 12:35 UTC |
	|         | --alsologtostderr -v=1                                 |                           |         |         |                     |                     |
	| unpause | -p no-preload-880190                                   | no-preload-880190         | jenkins | v1.33.0 | 29 Apr 24 12:35 UTC | 29 Apr 24 12:35 UTC |
	|         | --alsologtostderr -v=1                                 |                           |         |         |                     |                     |
	| delete  | -p no-preload-880190                                   | no-preload-880190         | jenkins | v1.33.0 | 29 Apr 24 12:35 UTC | 29 Apr 24 12:35 UTC |
	| delete  | -p no-preload-880190                                   | no-preload-880190         | jenkins | v1.33.0 | 29 Apr 24 12:35 UTC | 29 Apr 24 12:35 UTC |
	| start   | -p embed-certs-635140                                  | embed-certs-635140        | jenkins | v1.33.0 | 29 Apr 24 12:35 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 12:35:41
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 12:35:41.072935 1432272 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:35:41.073101 1432272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:35:41.073111 1432272 out.go:304] Setting ErrFile to fd 2...
	I0429 12:35:41.073116 1432272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:35:41.073657 1432272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18756-1231546/.minikube/bin
	I0429 12:35:41.074252 1432272 out.go:298] Setting JSON to false
	I0429 12:35:41.075353 1432272 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":29885,"bootTime":1714364256,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0429 12:35:41.075487 1432272 start.go:139] virtualization:  
	I0429 12:35:41.078490 1432272 out.go:177] * [embed-certs-635140] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0429 12:35:41.080756 1432272 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 12:35:41.080871 1432272 notify.go:220] Checking for updates...
	I0429 12:35:41.082916 1432272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:35:41.085023 1432272 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18756-1231546/kubeconfig
	I0429 12:35:41.087251 1432272 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18756-1231546/.minikube
	I0429 12:35:41.089465 1432272 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0429 12:35:41.091150 1432272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:35:41.093405 1432272 config.go:182] Loaded profile config "old-k8s-version-425197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 12:35:41.093515 1432272 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:35:41.114343 1432272 docker.go:122] docker version: linux-26.1.0:Docker Engine - Community
	I0429 12:35:41.114464 1432272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 12:35:41.185336 1432272 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-29 12:35:41.175793374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 12:35:41.185456 1432272 docker.go:295] overlay module found
	I0429 12:35:41.189501 1432272 out.go:177] * Using the docker driver based on user configuration
	I0429 12:35:41.191526 1432272 start.go:297] selected driver: docker
	I0429 12:35:41.191541 1432272 start.go:901] validating driver "docker" against <nil>
	I0429 12:35:41.191555 1432272 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:35:41.192198 1432272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 12:35:41.252148 1432272 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-29 12:35:41.241918516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 12:35:41.252317 1432272 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 12:35:41.252560 1432272 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:35:41.254626 1432272 out.go:177] * Using Docker driver with root privileges
	I0429 12:35:41.256281 1432272 cni.go:84] Creating CNI manager for ""
	I0429 12:35:41.256305 1432272 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 12:35:41.256316 1432272 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 12:35:41.256407 1432272 start.go:340] cluster config:
	{Name:embed-certs-635140 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-635140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:35:41.258308 1432272 out.go:177] * Starting "embed-certs-635140" primary control-plane node in "embed-certs-635140" cluster
	I0429 12:35:41.260269 1432272 cache.go:121] Beginning downloading kic base image for docker with crio
	I0429 12:35:41.262240 1432272 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 12:35:41.264066 1432272 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 12:35:41.264121 1432272 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18756-1231546/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4
	I0429 12:35:41.264133 1432272 cache.go:56] Caching tarball of preloaded images
	I0429 12:35:41.264163 1432272 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 12:35:41.264211 1432272 preload.go:173] Found /home/jenkins/minikube-integration/18756-1231546/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0429 12:35:41.264222 1432272 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 12:35:41.264331 1432272 profile.go:143] Saving config to /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/embed-certs-635140/config.json ...
	I0429 12:35:41.264347 1432272 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/embed-certs-635140/config.json: {Name:mk4501ff0c6b04ac0cb9b119b874a2bab0f68fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:35:41.281093 1432272 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 12:35:41.281117 1432272 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 12:35:41.281138 1432272 cache.go:194] Successfully downloaded all kic artifacts
	I0429 12:35:41.281165 1432272 start.go:360] acquireMachinesLock for embed-certs-635140: {Name:mk793719d761fc16f39f9e28387e2f7e1e8ff535 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:35:41.281739 1432272 start.go:364] duration metric: took 548.288µs to acquireMachinesLock for "embed-certs-635140"
	I0429 12:35:41.281775 1432272 start.go:93] Provisioning new machine with config: &{Name:embed-certs-635140 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-635140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 12:35:41.281870 1432272 start.go:125] createHost starting for "" (driver="docker")
	I0429 12:35:43.500976 1423526 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0429 12:35:43.509509 1423526 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0429 12:35:43.513661 1423526 out.go:177] 
	W0429 12:35:43.515996 1423526 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0429 12:35:43.516194 1423526 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0429 12:35:43.516256 1423526 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0429 12:35:43.516290 1423526 out.go:239] * 
	W0429 12:35:43.517800 1423526 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 12:35:43.520235 1423526 out.go:177] 
	
	
	==> CRI-O <==
	Apr 29 12:33:26 old-k8s-version-425197 crio[639]: time="2024-04-29 12:33:26.211617657Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e3edc651-3a67-49a6-8def-948e086838a2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:33:39 old-k8s-version-425197 crio[639]: time="2024-04-29 12:33:39.211185653Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=c176bf51-72f9-4a65-8fa5-8af05770587e name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:33:39 old-k8s-version-425197 crio[639]: time="2024-04-29 12:33:39.211425529Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=c176bf51-72f9-4a65-8fa5-8af05770587e name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:33:51 old-k8s-version-425197 crio[639]: time="2024-04-29 12:33:51.211168072Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e357d9fd-ad3f-4ff0-a735-0a5b4ef54129 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:33:51 old-k8s-version-425197 crio[639]: time="2024-04-29 12:33:51.211405633Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e357d9fd-ad3f-4ff0-a735-0a5b4ef54129 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:34:03 old-k8s-version-425197 crio[639]: time="2024-04-29 12:34:03.211273218Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=cdd1a439-4a8e-4c97-b528-db49c978b4c6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:34:03 old-k8s-version-425197 crio[639]: time="2024-04-29 12:34:03.211513307Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=cdd1a439-4a8e-4c97-b528-db49c978b4c6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:34:14 old-k8s-version-425197 crio[639]: time="2024-04-29 12:34:14.211095828Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=78d8643e-ff14-448d-b623-8f2483178011 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:34:14 old-k8s-version-425197 crio[639]: time="2024-04-29 12:34:14.211338609Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=78d8643e-ff14-448d-b623-8f2483178011 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:34:27 old-k8s-version-425197 crio[639]: time="2024-04-29 12:34:27.212035159Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=2ef950bf-75ba-4a40-a37f-07461395f0b3 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:34:27 old-k8s-version-425197 crio[639]: time="2024-04-29 12:34:27.212277447Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=2ef950bf-75ba-4a40-a37f-07461395f0b3 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:34:40 old-k8s-version-425197 crio[639]: time="2024-04-29 12:34:40.211244813Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=39fa64af-e86d-4842-a103-a98489fc9ad2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:34:40 old-k8s-version-425197 crio[639]: time="2024-04-29 12:34:40.211492287Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=39fa64af-e86d-4842-a103-a98489fc9ad2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:34:46 old-k8s-version-425197 crio[639]: time="2024-04-29 12:34:46.244286496Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=11f86f62-d3b8-4631-981b-67a5712ea1f7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:34:46 old-k8s-version-425197 crio[639]: time="2024-04-29 12:34:46.244541847Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c,RepoTags:[k8s.gcr.io/pause:3.2 registry.k8s.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f registry.k8s.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 registry.k8s.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:489397,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=11f86f62-d3b8-4631-981b-67a5712ea1f7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:34:55 old-k8s-version-425197 crio[639]: time="2024-04-29 12:34:55.211260011Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=106f5c43-1e84-403e-9477-d02a77d55d90 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:34:55 old-k8s-version-425197 crio[639]: time="2024-04-29 12:34:55.211496408Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=106f5c43-1e84-403e-9477-d02a77d55d90 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:35:07 old-k8s-version-425197 crio[639]: time="2024-04-29 12:35:07.211241414Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e8ce47a8-a801-44c6-a797-643ffb7ec839 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:35:07 old-k8s-version-425197 crio[639]: time="2024-04-29 12:35:07.211475464Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e8ce47a8-a801-44c6-a797-643ffb7ec839 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:35:22 old-k8s-version-425197 crio[639]: time="2024-04-29 12:35:22.211452691Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=a9b4a140-b832-4264-8535-605ccdbb7458 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:35:22 old-k8s-version-425197 crio[639]: time="2024-04-29 12:35:22.211693920Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=a9b4a140-b832-4264-8535-605ccdbb7458 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:35:37 old-k8s-version-425197 crio[639]: time="2024-04-29 12:35:37.211196647Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=5d038382-48e1-4240-8b46-0e4c6f163c96 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:35:37 old-k8s-version-425197 crio[639]: time="2024-04-29 12:35:37.211441569Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=5d038382-48e1-4240-8b46-0e4c6f163c96 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 29 12:35:37 old-k8s-version-425197 crio[639]: time="2024-04-29 12:35:37.212647092Z" level=info msg="Pulling image: fake.domain/registry.k8s.io/echoserver:1.4" id=c9aa4dad-b99c-46f0-8b6d-8d41ba24f022 name=/runtime.v1alpha2.ImageService/PullImage
	Apr 29 12:35:37 old-k8s-version-425197 crio[639]: time="2024-04-29 12:35:37.228142521Z" level=info msg="Trying to access \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	cee29e4ac7387       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           2 minutes ago       Exited              dashboard-metrics-scraper   5                   d38b5b146fb9d       dashboard-metrics-scraper-8d5bb5db8-xlb6n
	3ab604aa41320       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           5 minutes ago       Running             storage-provisioner         1                   8b337480a7787       storage-provisioner
	8e449405f9789       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   5 minutes ago       Running             kubernetes-dashboard        0                   5dc8c2e1af8a3       kubernetes-dashboard-cd95d586-jjpdf
	18f81edeecda8       25a5233254979d0678a2db1d15b76b73dc380d81bc5eed93916ba5638b3cd894                                           5 minutes ago       Running             kube-proxy                  0                   6e3d520a7f018       kube-proxy-5hlxl
	e7e73acd6e801       db91994f4ee8f894a1e8a6c1a76f615da8fc3c019300a3686291ce6fcbc57895                                           5 minutes ago       Running             coredns                     0                   ee91dff2888ee       coredns-74ff55c5b-hzh92
	b187065048c48       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d                                           5 minutes ago       Running             kindnet-cni                 0                   b4184ed33bfb8       kindnet-7lslc
	a8ebc8ecfe2f9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           5 minutes ago       Exited              storage-provisioner         0                   8b337480a7787       storage-provisioner
	4fd0a65f52ff1       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           5 minutes ago       Running             busybox                     0                   c29e9ca3f4778       busybox
	f54a7f3a396ba       1df8a2b116bd16f7070fd383a6769c8d644b365575e8ffa3e492b84e4f05fc74                                           5 minutes ago       Running             kube-controller-manager     0                   d64e525e538fb       kube-controller-manager-old-k8s-version-425197
	2d3031452f789       e7605f88f17d6a4c3f083ef9c6f5f19b39f87e4d4406a05a8612b54a6ea57051                                           5 minutes ago       Running             kube-scheduler              0                   c8fe2fe4fda91       kube-scheduler-old-k8s-version-425197
	f87e16673fad4       05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28                                           5 minutes ago       Running             etcd                        0                   e5dc54ba5a240       etcd-old-k8s-version-425197
	f70d0c48d3996       2c08bbbc02d3aa5dfbf4e79f15c0a61424049288917aa10364464ca1f7de7157                                           5 minutes ago       Running             kube-apiserver              0                   b4736b572dded       kube-apiserver-old-k8s-version-425197
	
	
	==> coredns [e7e73acd6e8014a42c001b36ed0f2d3c63e892feb1fc378fe2fdd2de335d53dc] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:51940 - 16008 "HINFO IN 1501846682157261104.1390484488655506091. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031190276s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:41529 - 41350 "HINFO IN 8450241710363776983.7337920661522141015. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012867136s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0429 12:30:28.745277       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-04-29 12:29:58.744544356 +0000 UTC m=+0.026676743) (total time: 30.000620091s):
	Trace[2019727887]: [30.000620091s] [30.000620091s] END
	E0429 12:30:28.745306       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0429 12:30:28.745713       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-04-29 12:29:58.745404306 +0000 UTC m=+0.027536684) (total time: 30.000282806s):
	Trace[939984059]: [30.000282806s] [30.000282806s] END
	E0429 12:30:28.745728       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0429 12:30:28.745933       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-04-29 12:29:58.745649662 +0000 UTC m=+0.027782041) (total time: 30.000260209s):
	Trace[911902081]: [30.000260209s] [30.000260209s] END
	E0429 12:30:28.745992       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-425197
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-425197
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8dbdf0965c47dd73171cfc89e1a9c75505f7a22d
	                    minikube.k8s.io/name=old-k8s-version-425197
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T12_27_11_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 12:27:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-425197
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 12:35:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 12:30:46 +0000   Mon, 29 Apr 2024 12:27:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 12:30:46 +0000   Mon, 29 Apr 2024 12:27:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 12:30:46 +0000   Mon, 29 Apr 2024 12:27:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 12:30:46 +0000   Mon, 29 Apr 2024 12:28:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-425197
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 02ab2fd0610d4d41a8b1f9fa3f181fbb
	  System UUID:                6e562de7-3b34-4283-bcb8-f96f0e4b2c83
	  Boot ID:                    b8f2360a-0b19-4e04-aa8c-604719eae8f1
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 coredns-74ff55c5b-hzh92                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m19s
	  kube-system                 etcd-old-k8s-version-425197                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m26s
	  kube-system                 kindnet-7lslc                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m19s
	  kube-system                 kube-apiserver-old-k8s-version-425197             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-controller-manager-old-k8s-version-425197    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-proxy-5hlxl                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 kube-scheduler-old-k8s-version-425197             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 metrics-server-9975d5f86-jgp2v                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m31s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-xlb6n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-jjpdf               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m46s (x5 over 8m46s)  kubelet     Node old-k8s-version-425197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m46s (x5 over 8m46s)  kubelet     Node old-k8s-version-425197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m46s (x4 over 8m46s)  kubelet     Node old-k8s-version-425197 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m27s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m27s                  kubelet     Node old-k8s-version-425197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m27s                  kubelet     Node old-k8s-version-425197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m27s                  kubelet     Node old-k8s-version-425197 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m18s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                7m36s                  kubelet     Node old-k8s-version-425197 status is now: NodeReady
	  Normal  Starting                 5m59s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m59s (x8 over 5m59s)  kubelet     Node old-k8s-version-425197 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m59s (x8 over 5m59s)  kubelet     Node old-k8s-version-425197 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m59s (x8 over 5m59s)  kubelet     Node old-k8s-version-425197 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m45s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.001000] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000e2172674
	[  +0.001140] FS-Cache: N-key=[8] 'e5405c0100000000'
	[  +0.004916] FS-Cache: Duplicate cookie detected
	[  +0.000791] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.001003] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=00000000ec0b8a4d
	[  +0.001195] FS-Cache: O-key=[8] 'e5405c0100000000'
	[  +0.000745] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001015] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000b703c03b
	[  +0.001048] FS-Cache: N-key=[8] 'e5405c0100000000'
	[  +2.246469] FS-Cache: Duplicate cookie detected
	[  +0.000831] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001017] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=00000000bb4b91ea
	[  +0.001149] FS-Cache: O-key=[8] 'e4405c0100000000'
	[  +0.000878] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001015] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=000000008e2f8fe5
	[  +0.001216] FS-Cache: N-key=[8] 'e4405c0100000000'
	[  +0.409874] FS-Cache: Duplicate cookie detected
	[  +0.000810] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.000962] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=000000008545b5a7
	[  +0.001083] FS-Cache: O-key=[8] 'ea405c0100000000'
	[  +0.000751] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001137] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000804f7cf5
	[  +0.001117] FS-Cache: N-key=[8] 'ea405c0100000000'
	[Apr29 12:22] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	[  +0.139125] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [f87e16673fad4748fc3431ca613237d1244ca35397a18f8cc52a6f1a55706439] <==
	2024-04-29 12:31:39.223534 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:31:49.223507 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:31:59.223579 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:32:09.223449 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:32:19.223728 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:32:29.223635 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:32:39.223539 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:32:49.223522 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:32:59.223586 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:33:09.223550 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:33:19.223659 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:33:29.223543 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:33:39.223520 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:33:49.223564 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:33:59.223495 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:34:09.223740 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:34:19.223549 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:34:29.223564 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:34:39.223593 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:34:49.223537 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:34:59.223535 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:35:09.223562 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:35:19.223547 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:35:29.223502 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-29 12:35:39.223759 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 12:35:45 up  8:18,  0 users,  load average: 1.09, 1.47, 1.91
	Linux old-k8s-version-425197 5.15.0-1058-aws #64~20.04.1-Ubuntu SMP Tue Apr 9 11:11:55 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b187065048c48a70de847611a7bfa1e7d604a1f06e81f41522e2c4623edd5594] <==
	I0429 12:33:39.300514       1 main.go:227] handling current node
	I0429 12:33:49.318163       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0429 12:33:49.318193       1 main.go:227] handling current node
	I0429 12:33:59.327442       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0429 12:33:59.327472       1 main.go:227] handling current node
	I0429 12:34:09.343776       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0429 12:34:09.343883       1 main.go:227] handling current node
	I0429 12:34:19.360566       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0429 12:34:19.360595       1 main.go:227] handling current node
	I0429 12:34:29.368977       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0429 12:34:29.369005       1 main.go:227] handling current node
	I0429 12:34:39.384169       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0429 12:34:39.384195       1 main.go:227] handling current node
	I0429 12:34:49.401859       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0429 12:34:49.401886       1 main.go:227] handling current node
	I0429 12:34:59.412641       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0429 12:34:59.412768       1 main.go:227] handling current node
	I0429 12:35:09.425418       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0429 12:35:09.425450       1 main.go:227] handling current node
	I0429 12:35:19.432096       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0429 12:35:19.432124       1 main.go:227] handling current node
	I0429 12:35:29.444714       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0429 12:35:29.444742       1 main.go:227] handling current node
	I0429 12:35:39.448300       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0429 12:35:39.448334       1 main.go:227] handling current node
	
	
	==> kube-apiserver [f70d0c48d399667aeefaece1ef554897ebeb68eb674e2b89d699347639d7798f] <==
	I0429 12:32:34.673584       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0429 12:32:34.673592       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0429 12:32:59.109694       1 handler_proxy.go:102] no RequestInfo found in the context
	E0429 12:32:59.109769       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 12:32:59.109777       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0429 12:33:11.953891       1 client.go:360] parsed scheme: "passthrough"
	I0429 12:33:11.953935       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0429 12:33:11.953943       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0429 12:33:50.508423       1 client.go:360] parsed scheme: "passthrough"
	I0429 12:33:50.508465       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0429 12:33:50.508473       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0429 12:34:25.509463       1 client.go:360] parsed scheme: "passthrough"
	I0429 12:34:25.509519       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0429 12:34:25.509531       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0429 12:34:57.203936       1 handler_proxy.go:102] no RequestInfo found in the context
	E0429 12:34:57.204002       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 12:34:57.204010       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0429 12:35:01.731158       1 client.go:360] parsed scheme: "passthrough"
	I0429 12:35:01.731278       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0429 12:35:01.731312       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0429 12:35:36.958347       1 client.go:360] parsed scheme: "passthrough"
	I0429 12:35:36.958395       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0429 12:35:36.958404       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [f54a7f3a396ba3db9133a1183e07c7fc6d06a8419cfc3ff1a57aa605a963a445] <==
	W0429 12:31:19.856036       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0429 12:31:43.504517       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0429 12:31:51.506452       1 request.go:655] Throttling request took 1.048342084s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0429 12:31:52.359039       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0429 12:32:14.009347       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0429 12:32:24.009573       1 request.go:655] Throttling request took 1.048384213s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0429 12:32:24.860910       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0429 12:32:44.511111       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0429 12:32:56.511259       1 request.go:655] Throttling request took 1.04830499s, request: GET:https://192.168.85.2:8443/apis/autoscaling/v1?timeout=32s
	W0429 12:32:57.362700       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0429 12:33:15.014636       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0429 12:33:29.013121       1 request.go:655] Throttling request took 1.048444456s, request: GET:https://192.168.85.2:8443/apis/autoscaling/v1?timeout=32s
	W0429 12:33:29.864930       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0429 12:33:45.526382       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0429 12:34:01.515414       1 request.go:655] Throttling request took 1.048314575s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0429 12:34:02.366801       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0429 12:34:16.028339       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0429 12:34:34.017207       1 request.go:655] Throttling request took 1.048381083s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0429 12:34:34.868597       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0429 12:34:46.530290       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0429 12:35:06.519066       1 request.go:655] Throttling request took 1.048336649s, request: GET:https://192.168.85.2:8443/apis/coordination.k8s.io/v1beta1?timeout=32s
	W0429 12:35:07.371615       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0429 12:35:17.032224       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0429 12:35:39.022058       1 request.go:655] Throttling request took 1.048189423s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0429 12:35:39.873491       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [18f81edeecda84f01c57c2228b6bbacaea535fd1209a9525e4f4eb7e1ccec852] <==
	I0429 12:27:27.544645       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0429 12:27:27.544945       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0429 12:27:27.573050       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0429 12:27:27.573215       1 server_others.go:185] Using iptables Proxier.
	I0429 12:27:27.573533       1 server.go:650] Version: v1.20.0
	I0429 12:27:27.577527       1 config.go:315] Starting service config controller
	I0429 12:27:27.577547       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0429 12:27:27.577817       1 config.go:224] Starting endpoint slice config controller
	I0429 12:27:27.577831       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0429 12:27:27.679476       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0429 12:27:27.679715       1 shared_informer.go:247] Caches are synced for service config 
	I0429 12:29:59.992552       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0429 12:29:59.992635       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0429 12:30:00.156265       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0429 12:30:00.156536       1 server_others.go:185] Using iptables Proxier.
	I0429 12:30:00.156905       1 server.go:650] Version: v1.20.0
	I0429 12:30:00.158007       1 config.go:315] Starting service config controller
	I0429 12:30:00.158145       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0429 12:30:00.158208       1 config.go:224] Starting endpoint slice config controller
	I0429 12:30:00.158242       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0429 12:30:00.361357       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0429 12:30:00.361733       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [2d3031452f789fb6a9b7327ac93e3217d5bdc6b6383542150d20d2538540820b] <==
	E0429 12:27:07.417420       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 12:27:07.417548       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 12:27:07.417635       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 12:27:07.431480       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 12:27:07.431595       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 12:27:07.431701       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 12:27:07.431792       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 12:27:07.431893       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 12:27:07.433009       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 12:27:08.267619       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 12:27:08.332489       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 12:27:08.360306       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 12:27:08.396804       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 12:27:08.396904       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0429 12:27:09.004938       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0429 12:29:52.392630       1 serving.go:331] Generated self-signed cert in-memory
	W0429 12:29:56.197858       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0429 12:29:56.197911       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 12:29:56.197927       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 12:29:56.197933       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 12:29:56.442614       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 12:29:56.442650       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 12:29:56.442812       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0429 12:29:56.451210       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0429 12:29:56.642837       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Apr 29 12:34:14 old-k8s-version-425197 kubelet[751]: E0429 12:34:14.211820     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 29 12:34:27 old-k8s-version-425197 kubelet[751]: I0429 12:34:27.210936     751 scope.go:95] [topologymanager] RemoveContainer - Container ID: cee29e4ac7387e8c123612f3ca5f03d8916a985a35e561c6bdbd2376b5b3030a
	Apr 29 12:34:27 old-k8s-version-425197 kubelet[751]: E0429 12:34:27.211244     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	Apr 29 12:34:27 old-k8s-version-425197 kubelet[751]: E0429 12:34:27.212498     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 29 12:34:38 old-k8s-version-425197 kubelet[751]: I0429 12:34:38.210759     751 scope.go:95] [topologymanager] RemoveContainer - Container ID: cee29e4ac7387e8c123612f3ca5f03d8916a985a35e561c6bdbd2376b5b3030a
	Apr 29 12:34:38 old-k8s-version-425197 kubelet[751]: E0429 12:34:38.211089     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	Apr 29 12:34:40 old-k8s-version-425197 kubelet[751]: E0429 12:34:40.211661     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 29 12:34:46 old-k8s-version-425197 kubelet[751]: E0429 12:34:46.253196     751 container_manager_linux.go:533] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /docker/9ec77e830d08baef23626f31b5ae4f87186a167e012dfa15a04ef645805d0496, memory: /docker/9ec77e830d08baef23626f31b5ae4f87186a167e012dfa15a04ef645805d0496/system.slice/kubelet.service
	Apr 29 12:34:52 old-k8s-version-425197 kubelet[751]: I0429 12:34:52.210864     751 scope.go:95] [topologymanager] RemoveContainer - Container ID: cee29e4ac7387e8c123612f3ca5f03d8916a985a35e561c6bdbd2376b5b3030a
	Apr 29 12:34:52 old-k8s-version-425197 kubelet[751]: E0429 12:34:52.211238     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	Apr 29 12:34:55 old-k8s-version-425197 kubelet[751]: E0429 12:34:55.211712     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 29 12:35:03 old-k8s-version-425197 kubelet[751]: I0429 12:35:03.210825     751 scope.go:95] [topologymanager] RemoveContainer - Container ID: cee29e4ac7387e8c123612f3ca5f03d8916a985a35e561c6bdbd2376b5b3030a
	Apr 29 12:35:03 old-k8s-version-425197 kubelet[751]: E0429 12:35:03.211147     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	Apr 29 12:35:07 old-k8s-version-425197 kubelet[751]: E0429 12:35:07.211697     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 29 12:35:14 old-k8s-version-425197 kubelet[751]: I0429 12:35:14.210876     751 scope.go:95] [topologymanager] RemoveContainer - Container ID: cee29e4ac7387e8c123612f3ca5f03d8916a985a35e561c6bdbd2376b5b3030a
	Apr 29 12:35:14 old-k8s-version-425197 kubelet[751]: E0429 12:35:14.212381     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	Apr 29 12:35:22 old-k8s-version-425197 kubelet[751]: E0429 12:35:22.212000     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 29 12:35:27 old-k8s-version-425197 kubelet[751]: I0429 12:35:27.210820     751 scope.go:95] [topologymanager] RemoveContainer - Container ID: cee29e4ac7387e8c123612f3ca5f03d8916a985a35e561c6bdbd2376b5b3030a
	Apr 29 12:35:27 old-k8s-version-425197 kubelet[751]: E0429 12:35:27.211183     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
	Apr 29 12:35:37 old-k8s-version-425197 kubelet[751]: E0429 12:35:37.233956     751 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Apr 29 12:35:37 old-k8s-version-425197 kubelet[751]: E0429 12:35:37.234005     751 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Apr 29 12:35:37 old-k8s-version-425197 kubelet[751]: E0429 12:35:37.234135     751 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-6n4sr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa): ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Apr 29 12:35:37 old-k8s-version-425197 kubelet[751]: E0429 12:35:37.234163     751 pod_workers.go:191] Error syncing pod a940d36c-f69c-494d-a918-07d04aa421aa ("metrics-server-9975d5f86-jgp2v_kube-system(a940d36c-f69c-494d-a918-07d04aa421aa)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Apr 29 12:35:41 old-k8s-version-425197 kubelet[751]: I0429 12:35:41.210824     751 scope.go:95] [topologymanager] RemoveContainer - Container ID: cee29e4ac7387e8c123612f3ca5f03d8916a985a35e561c6bdbd2376b5b3030a
	Apr 29 12:35:41 old-k8s-version-425197 kubelet[751]: E0429 12:35:41.211171     751 pod_workers.go:191] Error syncing pod 82bfd63e-9627-4725-9f84-902d955704d9 ("dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xlb6n_kubernetes-dashboard(82bfd63e-9627-4725-9f84-902d955704d9)"
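The repeating `pod_workers.go` lines above lend themselves to mechanical triage. A minimal sketch (Python, assuming only the `failed to "StartContainer" for "NAME" with REASON:` shape visible in this transcript) that tallies which container is stuck in which back-off state:

```python
import re
from collections import Counter

# Matches the kubelet "Error syncing pod ... skipping" lines shown above, e.g.
#   ... failed to "StartContainer" for "metrics-server" with ImagePullBackOff: ...
PATTERN = re.compile(
    r'failed to "StartContainer" for "(?P<container>[^"]+)" with (?P<reason>\w+):'
)

def tally_backoffs(log_lines):
    """Count (container, reason) pairs across kubelet log lines."""
    counts = Counter()
    for line in log_lines:
        m = PATTERN.search(line)
        if m:
            counts[(m.group("container"), m.group("reason"))] += 1
    return counts
```

Run over the block above, this separates the two distinct failures: dashboard-metrics-scraper in CrashLoopBackOff versus metrics-server cycling between ImagePullBackOff and ErrImagePull on the deliberately unresolvable `fake.domain` registry.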
	
	
	==> kubernetes-dashboard [8e449405f978982d5dbc2fab079a55aed290a3bb6c162ea759272192a3d96fa9] <==
	2024/04/29 12:30:22 Starting overwatch
	2024/04/29 12:30:22 Using namespace: kubernetes-dashboard
	2024/04/29 12:30:22 Using in-cluster config to connect to apiserver
	2024/04/29 12:30:22 Using secret token for csrf signing
	2024/04/29 12:30:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/04/29 12:30:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/04/29 12:30:22 Successful initial request to the apiserver, version: v1.20.0
	2024/04/29 12:30:22 Generating JWE encryption key
	2024/04/29 12:30:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/04/29 12:30:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/04/29 12:30:23 Initializing JWE encryption key from synchronized object
	2024/04/29 12:30:23 Creating in-cluster Sidecar client
	2024/04/29 12:30:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/29 12:30:23 Serving insecurely on HTTP port: 9090
	2024/04/29 12:30:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/29 12:31:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/29 12:31:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/29 12:32:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/29 12:32:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/29 12:33:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/29 12:33:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/29 12:34:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/29 12:34:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/29 12:35:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
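The dashboard's health-check failures above recur on a fixed cadence. A small sketch (Python, assuming the `YYYY/MM/DD HH:MM:SS` timestamp prefix used in this log) that extracts the gaps between successive failures, confirming the advertised ~30-second retry:

```python
from datetime import datetime

def retry_intervals(lines, marker="Metric client health check failed"):
    """Return gaps, in seconds, between successive matching log lines.

    Assumes each line starts with a "YYYY/MM/DD HH:MM:SS" timestamp
    (19 characters), as in the kubernetes-dashboard log above.
    """
    stamps = [
        datetime.strptime(line[:19], "%Y/%m/%d %H:%M:%S")
        for line in lines
        if marker in line
    ]
    return [int((b - a).total_seconds()) for a, b in zip(stamps, stamps[1:])]
```

Applied to the block above it yields intervals of 30–31 seconds, i.e. the retries are on schedule; the failures come from the metrics-server sidecar never becoming reachable, not from the dashboard.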
	
	
	==> storage-provisioner [3ab604aa41320d5c9477a97ac38a8892497dd2dbef11c5750c4c98c0fcff96bd] <==
	I0429 12:30:28.505155       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 12:30:28.524996       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 12:30:28.525057       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 12:30:45.969222       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 12:30:45.969384       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-425197_77b23491-8dd7-4cb7-8dda-d621344baa5c!
	I0429 12:30:45.970152       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"255b8207-e6f9-461c-b07c-9060f36e5434", APIVersion:"v1", ResourceVersion:"808", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-425197_77b23491-8dd7-4cb7-8dda-d621344baa5c became leader
	I0429 12:30:46.070141       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-425197_77b23491-8dd7-4cb7-8dda-d621344baa5c!
	
	
	==> storage-provisioner [a8ebc8ecfe2f93123a8fd7e1b220084645143d83b644d4ac445a896b7a30c0cb] <==
	I0429 12:28:14.517185       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 12:28:14.538669       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 12:28:14.538808       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 12:28:14.569918       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 12:28:14.571179       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"255b8207-e6f9-461c-b07c-9060f36e5434", APIVersion:"v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-425197_ab236131-b432-405f-a1bd-27a023a3a669 became leader
	I0429 12:28:14.573257       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-425197_ab236131-b432-405f-a1bd-27a023a3a669!
	I0429 12:28:14.674404       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-425197_ab236131-b432-405f-a1bd-27a023a3a669!
	E0429 12:29:15.573136       1 leaderelection.go:361] Failed to update lock: Put "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": http2: server sent GOAWAY and closed the connection; LastStreamID=143, ErrCode=NO_ERROR, debug=""
	I0429 12:29:58.072978       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0429 12:30:28.075671       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
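The storage-provisioner log above uses the klog/glog header format (`[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`), where the leading letter is the severity. A sketch (Python, assuming only that header shape) that buckets lines by severity, so the single fatal `F` line — the provisioner giving up after the apiserver i/o timeout — stands out from the routine `I` chatter:

```python
import re
from collections import defaultdict

# klog/glog header: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG = re.compile(
    r"^(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>[\d:.]+)\s+\d+ (?P<src>\S+)\] (?P<msg>.*)$"
)

SEVERITIES = {"I": "info", "W": "warning", "E": "error", "F": "fatal"}

def by_severity(lines):
    """Group klog-formatted messages under info/warning/error/fatal."""
    buckets = defaultdict(list)
    for line in lines:
        m = KLOG.match(line)
        if m:
            buckets[SEVERITIES[m.group("sev")]].append(m.group("msg"))
    return buckets
```

Against the second storage-provisioner block this surfaces one `error` (the GOAWAY on the lease update) and one `fatal` (the version probe timing out during the apiserver restart window).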
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-425197 -n old-k8s-version-425197
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-425197 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-jgp2v
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-425197 describe pod metrics-server-9975d5f86-jgp2v
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-425197 describe pod metrics-server-9975d5f86-jgp2v: exit status 1 (118.702078ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-jgp2v" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-425197 describe pod metrics-server-9975d5f86-jgp2v: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (377.37s)
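The post-mortem above lists non-running pods with a field selector and then describes each one, and the describe step exits with NotFound: the `metrics-server-9975d5f86-jgp2v` pod was deleted between the two kubectl calls. The list-then-act race can be sketched as follows (Python; the helper and its callables are illustrative stand-ins for the kubectl invocations, not code from the minikube harness):

```python
def describe_surviving_pods(list_pods, describe_pod):
    """Describe each pod returned by list_pods, tolerating ones that vanish.

    list_pods: callable returning pod names (the field-selector 'list' step).
    describe_pod: callable(name) -> description, raising KeyError for a pod
    deleted between the two steps (standing in for kubectl's NotFound error).
    """
    results = {}
    for name in list_pods():
        try:
            results[name] = describe_pod(name)
        except KeyError:
            # Existed when listed, gone by describe time -- the race
            # visible in the transcript above.
            results[name] = None
    return results
```

Handled this way, a pod disappearing mid-post-mortem is recorded as gone rather than surfacing as a spurious non-zero exit.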

                                                
                                    

Test pass (295/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.51
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.30.0/json-events 7.37
13 TestDownloadOnly/v1.30.0/preload-exists 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.09
18 TestDownloadOnly/v1.30.0/DeleteAll 0.21
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.57
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 205.7
29 TestAddons/parallel/Registry 18.47
31 TestAddons/parallel/InspektorGadget 10.73
35 TestAddons/parallel/CSI 51.67
36 TestAddons/parallel/Headlamp 13.06
37 TestAddons/parallel/CloudSpanner 6.89
38 TestAddons/parallel/LocalPath 8.38
39 TestAddons/parallel/NvidiaDevicePlugin 5.75
40 TestAddons/parallel/Yakd 6
43 TestAddons/serial/GCPAuth/Namespaces 0.16
44 TestAddons/StoppedEnableDisable 12.23
45 TestCertOptions 33.89
46 TestCertExpiration 239.74
48 TestForceSystemdFlag 38.29
49 TestForceSystemdEnv 40.69
55 TestErrorSpam/setup 34.37
56 TestErrorSpam/start 0.71
57 TestErrorSpam/status 1.33
58 TestErrorSpam/pause 1.78
59 TestErrorSpam/unpause 1.81
60 TestErrorSpam/stop 1.48
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 75.15
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 20.81
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.09
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.45
72 TestFunctional/serial/CacheCmd/cache/add_local 1.08
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.89
77 TestFunctional/serial/CacheCmd/cache/delete 0.14
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
80 TestFunctional/serial/ExtraConfig 41.74
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.68
83 TestFunctional/serial/LogsFileCmd 1.7
84 TestFunctional/serial/InvalidService 4.46
86 TestFunctional/parallel/ConfigCmd 0.54
87 TestFunctional/parallel/DashboardCmd 10.1
88 TestFunctional/parallel/DryRun 0.63
89 TestFunctional/parallel/InternationalLanguage 0.26
90 TestFunctional/parallel/StatusCmd 1.05
94 TestFunctional/parallel/ServiceCmdConnect 10.62
95 TestFunctional/parallel/AddonsCmd 0.17
96 TestFunctional/parallel/PersistentVolumeClaim 24.76
98 TestFunctional/parallel/SSHCmd 0.68
99 TestFunctional/parallel/CpCmd 2.4
101 TestFunctional/parallel/FileSync 0.62
102 TestFunctional/parallel/CertSync 2.18
106 TestFunctional/parallel/NodeLabels 0.14
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
110 TestFunctional/parallel/License 0.33
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.49
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.14
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
124 TestFunctional/parallel/ProfileCmd/profile_list 0.4
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
126 TestFunctional/parallel/MountCmd/any-port 7.09
127 TestFunctional/parallel/ServiceCmd/List 0.5
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
130 TestFunctional/parallel/ServiceCmd/Format 0.48
131 TestFunctional/parallel/ServiceCmd/URL 0.4
132 TestFunctional/parallel/MountCmd/specific-port 2.18
133 TestFunctional/parallel/MountCmd/VerifyCleanup 1.37
134 TestFunctional/parallel/Version/short 0.08
135 TestFunctional/parallel/Version/components 1.33
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.37
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
140 TestFunctional/parallel/ImageCommands/ImageBuild 2.81
141 TestFunctional/parallel/ImageCommands/Setup 2.49
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.25
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.15
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.47
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.87
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.29
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.93
152 TestFunctional/delete_addon-resizer_images 0.08
153 TestFunctional/delete_my-image_image 0.01
154 TestFunctional/delete_minikube_cached_images 0.01
158 TestMultiControlPlane/serial/StartCluster 162.35
159 TestMultiControlPlane/serial/DeployApp 9.1
160 TestMultiControlPlane/serial/PingHostFromPods 1.71
161 TestMultiControlPlane/serial/AddWorkerNode 27.68
162 TestMultiControlPlane/serial/NodeLabels 0.11
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.74
164 TestMultiControlPlane/serial/CopyFile 18.81
165 TestMultiControlPlane/serial/StopSecondaryNode 12.69
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
167 TestMultiControlPlane/serial/RestartSecondaryNode 23.8
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 5.8
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 177.63
170 TestMultiControlPlane/serial/DeleteSecondaryNode 13.06
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
172 TestMultiControlPlane/serial/StopCluster 35.82
173 TestMultiControlPlane/serial/RestartCluster 132.65
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.54
175 TestMultiControlPlane/serial/AddSecondaryNode 58.36
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.76
180 TestJSONOutput/start/Command 75.14
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.77
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.69
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.85
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.23
205 TestKicCustomNetwork/create_custom_network 43.03
206 TestKicCustomNetwork/use_default_bridge_network 36.99
207 TestKicExistingNetwork 32.24
208 TestKicCustomSubnet 33.05
209 TestKicStaticIP 36.92
210 TestMainNoArgs 0.07
211 TestMinikubeProfile 71.6
214 TestMountStart/serial/StartWithMountFirst 9.25
215 TestMountStart/serial/VerifyMountFirst 0.27
216 TestMountStart/serial/StartWithMountSecond 7.03
217 TestMountStart/serial/VerifyMountSecond 0.27
218 TestMountStart/serial/DeleteFirst 1.62
219 TestMountStart/serial/VerifyMountPostDelete 0.27
220 TestMountStart/serial/Stop 1.2
221 TestMountStart/serial/RestartStopped 8.02
222 TestMountStart/serial/VerifyMountPostStop 0.27
225 TestMultiNode/serial/FreshStart2Nodes 92.21
226 TestMultiNode/serial/DeployApp2Nodes 5.05
227 TestMultiNode/serial/PingHostFrom2Pods 1.08
228 TestMultiNode/serial/AddNode 46.77
229 TestMultiNode/serial/MultiNodeLabels 0.09
230 TestMultiNode/serial/ProfileList 0.34
231 TestMultiNode/serial/CopyFile 10.33
232 TestMultiNode/serial/StopNode 2.25
233 TestMultiNode/serial/StartAfterStop 9.73
234 TestMultiNode/serial/RestartKeepsNodes 85.51
235 TestMultiNode/serial/DeleteNode 5.22
236 TestMultiNode/serial/StopMultiNode 23.87
237 TestMultiNode/serial/RestartMultiNode 55.28
238 TestMultiNode/serial/ValidateNameConflict 33.34
243 TestPreload 156.18
245 TestScheduledStopUnix 110.48
248 TestInsufficientStorage 10.4
249 TestRunningBinaryUpgrade 67.53
251 TestKubernetesUpgrade 390.06
252 TestMissingContainerUpgrade 153.62
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
255 TestNoKubernetes/serial/StartWithK8s 39.95
256 TestNoKubernetes/serial/StartWithStopK8s 8.74
257 TestNoKubernetes/serial/Start 8.76
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
259 TestNoKubernetes/serial/ProfileList 1.1
260 TestNoKubernetes/serial/Stop 1.23
261 TestNoKubernetes/serial/StartNoArgs 7.88
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
263 TestStoppedBinaryUpgrade/Setup 1.34
264 TestStoppedBinaryUpgrade/Upgrade 68.48
265 TestStoppedBinaryUpgrade/MinikubeLogs 1.12
274 TestPause/serial/Start 81.36
275 TestPause/serial/SecondStartNoReconfiguration 20.31
276 TestPause/serial/Pause 0.98
277 TestPause/serial/VerifyStatus 0.4
278 TestPause/serial/Unpause 0.86
279 TestPause/serial/PauseAgain 1.13
280 TestPause/serial/DeletePaused 2.9
281 TestPause/serial/VerifyDeletedResources 0.4
289 TestNetworkPlugins/group/false 5.58
294 TestStartStop/group/old-k8s-version/serial/FirstStart 167.26
295 TestStartStop/group/old-k8s-version/serial/DeployApp 10.13
297 TestStartStop/group/no-preload/serial/FirstStart 65.41
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.81
299 TestStartStop/group/old-k8s-version/serial/Stop 14.62
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.29
302 TestStartStop/group/no-preload/serial/DeployApp 9.45
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.66
304 TestStartStop/group/no-preload/serial/Stop 12.15
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
306 TestStartStop/group/no-preload/serial/SecondStart 288.96
307 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
309 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
310 TestStartStop/group/no-preload/serial/Pause 3.17
312 TestStartStop/group/embed-certs/serial/FirstStart 83.36
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.12
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
316 TestStartStop/group/old-k8s-version/serial/Pause 3.85
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.43
319 TestStartStop/group/embed-certs/serial/DeployApp 9.36
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.12
321 TestStartStop/group/embed-certs/serial/Stop 11.98
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
323 TestStartStop/group/embed-certs/serial/SecondStart 302.82
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.4
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.78
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.28
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 307.32
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
332 TestStartStop/group/embed-certs/serial/Pause 3.16
334 TestStartStop/group/newest-cni/serial/FirstStart 44.58
335 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
336 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.11
337 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.38
338 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.94
339 TestNetworkPlugins/group/auto/Start 88.78
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.6
342 TestStartStop/group/newest-cni/serial/Stop 1.31
343 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.28
344 TestStartStop/group/newest-cni/serial/SecondStart 21.61
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.43
348 TestStartStop/group/newest-cni/serial/Pause 3.93
349 TestNetworkPlugins/group/kindnet/Start 51.11
350 TestNetworkPlugins/group/auto/KubeletFlags 0.36
351 TestNetworkPlugins/group/auto/NetCatPod 10.37
352 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
353 TestNetworkPlugins/group/kindnet/KubeletFlags 0.39
354 TestNetworkPlugins/group/auto/DNS 0.26
355 TestNetworkPlugins/group/auto/Localhost 0.3
356 TestNetworkPlugins/group/kindnet/NetCatPod 10.37
357 TestNetworkPlugins/group/auto/HairPin 0.23
358 TestNetworkPlugins/group/kindnet/DNS 0.29
359 TestNetworkPlugins/group/kindnet/Localhost 0.25
360 TestNetworkPlugins/group/kindnet/HairPin 0.21
361 TestNetworkPlugins/group/flannel/Start 72.51
362 TestNetworkPlugins/group/enable-default-cni/Start 89.25
363 TestNetworkPlugins/group/flannel/ControllerPod 6.01
364 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
365 TestNetworkPlugins/group/flannel/NetCatPod 12.28
366 TestNetworkPlugins/group/flannel/DNS 0.22
367 TestNetworkPlugins/group/flannel/Localhost 0.17
368 TestNetworkPlugins/group/flannel/HairPin 0.15
369 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
370 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.36
371 TestNetworkPlugins/group/bridge/Start 91.95
372 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
373 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
374 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
375 TestNetworkPlugins/group/calico/Start 75.27
376 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
377 TestNetworkPlugins/group/bridge/NetCatPod 11.3
378 TestNetworkPlugins/group/calico/ControllerPod 6.01
379 TestNetworkPlugins/group/bridge/DNS 0.2
380 TestNetworkPlugins/group/bridge/Localhost 0.16
381 TestNetworkPlugins/group/bridge/HairPin 0.16
382 TestNetworkPlugins/group/calico/KubeletFlags 0.29
383 TestNetworkPlugins/group/calico/NetCatPod 12.27
384 TestNetworkPlugins/group/calico/DNS 0.29
385 TestNetworkPlugins/group/calico/Localhost 0.23
386 TestNetworkPlugins/group/calico/HairPin 0.23
387 TestNetworkPlugins/group/custom-flannel/Start 63.09
388 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
389 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.3
390 TestNetworkPlugins/group/custom-flannel/DNS 0.17
391 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
392 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (8.51s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-895081 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-895081 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.514502313s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.51s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-895081
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-895081: exit status 85 (81.984379ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-895081 | jenkins | v1.33.0 | 29 Apr 24 11:33 UTC |          |
	|         | -p download-only-895081        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 11:33:46
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 11:33:46.373923 1236980 out.go:291] Setting OutFile to fd 1 ...
	I0429 11:33:46.374128 1236980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:33:46.374157 1236980 out.go:304] Setting ErrFile to fd 2...
	I0429 11:33:46.374178 1236980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:33:46.374480 1236980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18756-1231546/.minikube/bin
	W0429 11:33:46.374635 1236980 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18756-1231546/.minikube/config/config.json: open /home/jenkins/minikube-integration/18756-1231546/.minikube/config/config.json: no such file or directory
	I0429 11:33:46.375085 1236980 out.go:298] Setting JSON to true
	I0429 11:33:46.376041 1236980 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":26171,"bootTime":1714364256,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0429 11:33:46.376145 1236980 start.go:139] virtualization:  
	I0429 11:33:46.378843 1236980 out.go:97] [download-only-895081] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0429 11:33:46.380699 1236980 out.go:169] MINIKUBE_LOCATION=18756
	W0429 11:33:46.379014 1236980 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18756-1231546/.minikube/cache/preloaded-tarball: no such file or directory
	I0429 11:33:46.379060 1236980 notify.go:220] Checking for updates...
	I0429 11:33:46.382516 1236980 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:33:46.384078 1236980 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18756-1231546/kubeconfig
	I0429 11:33:46.385536 1236980 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18756-1231546/.minikube
	I0429 11:33:46.387251 1236980 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0429 11:33:46.390757 1236980 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 11:33:46.391046 1236980 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 11:33:46.411961 1236980 docker.go:122] docker version: linux-26.1.0:Docker Engine - Community
	I0429 11:33:46.412067 1236980 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 11:33:46.474868 1236980 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-29 11:33:46.464700136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 11:33:46.474991 1236980 docker.go:295] overlay module found
	I0429 11:33:46.476549 1236980 out.go:97] Using the docker driver based on user configuration
	I0429 11:33:46.476593 1236980 start.go:297] selected driver: docker
	I0429 11:33:46.476604 1236980 start.go:901] validating driver "docker" against <nil>
	I0429 11:33:46.476750 1236980 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 11:33:46.533801 1236980 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-29 11:33:46.523879615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 11:33:46.533965 1236980 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 11:33:46.534241 1236980 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0429 11:33:46.534403 1236980 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 11:33:46.536769 1236980 out.go:169] Using Docker driver with root privileges
	I0429 11:33:46.538713 1236980 cni.go:84] Creating CNI manager for ""
	I0429 11:33:46.538735 1236980 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 11:33:46.538754 1236980 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 11:33:46.538848 1236980 start.go:340] cluster config:
	{Name:download-only-895081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-895081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:33:46.540503 1236980 out.go:97] Starting "download-only-895081" primary control-plane node in "download-only-895081" cluster
	I0429 11:33:46.540523 1236980 cache.go:121] Beginning downloading kic base image for docker with crio
	I0429 11:33:46.542584 1236980 out.go:97] Pulling base image v0.0.43-1713736339-18706 ...
	I0429 11:33:46.542611 1236980 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 11:33:46.542761 1236980 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 11:33:46.556661 1236980 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0429 11:33:46.556876 1236980 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory
	I0429 11:33:46.556987 1236980 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0429 11:33:46.610846 1236980 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0429 11:33:46.610881 1236980 cache.go:56] Caching tarball of preloaded images
	I0429 11:33:46.611042 1236980 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 11:33:46.613453 1236980 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0429 11:33:46.613478 1236980 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0429 11:33:46.718988 1236980 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/18756-1231546/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-895081 host does not exist
	  To start a cluster, run: "minikube start -p download-only-895081"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
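The non-zero exit above is expected rather than a failure: as the captured stdout shows, a download-only profile has no control-plane host, so `minikube logs` has nothing to read, and the test only records the exit status. A minimal Go sketch of how such an exit code can be captured (the `sh -c 'exit 85'` command is a stand-in for the real minikube invocation, not part of the harness):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitStatus runs a command and returns its exit code, tolerating a
// non-zero exit the way the test above tolerates exit status 85.
func exitStatus(name string, args ...string) int {
	err := exec.Command(name, args...).Run()
	if err == nil {
		return 0
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	return -1 // the command could not be started at all
}

func main() {
	// Stand-in for "out/minikube-linux-arm64 logs -p <profile>", which
	// exited 85 because the profile's host does not exist.
	fmt.Println(exitStatus("sh", "-c", "exit 85")) // 85
}
```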

TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-895081
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.30.0/json-events (7.37s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-665613 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-665613 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.374238752s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (7.37s)

TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-665613
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-665613: exit status 85 (85.508872ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-895081 | jenkins | v1.33.0 | 29 Apr 24 11:33 UTC |                     |
	|         | -p download-only-895081        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 29 Apr 24 11:33 UTC | 29 Apr 24 11:33 UTC |
	| delete  | -p download-only-895081        | download-only-895081 | jenkins | v1.33.0 | 29 Apr 24 11:33 UTC | 29 Apr 24 11:33 UTC |
	| start   | -o=json --download-only        | download-only-665613 | jenkins | v1.33.0 | 29 Apr 24 11:33 UTC |                     |
	|         | -p download-only-665613        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 11:33:55
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 11:33:55.327840 1237148 out.go:291] Setting OutFile to fd 1 ...
	I0429 11:33:55.328078 1237148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:33:55.328105 1237148 out.go:304] Setting ErrFile to fd 2...
	I0429 11:33:55.328126 1237148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:33:55.328412 1237148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18756-1231546/.minikube/bin
	I0429 11:33:55.328941 1237148 out.go:298] Setting JSON to true
	I0429 11:33:55.329928 1237148 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":26180,"bootTime":1714364256,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0429 11:33:55.330030 1237148 start.go:139] virtualization:  
	I0429 11:33:55.332368 1237148 out.go:97] [download-only-665613] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0429 11:33:55.333988 1237148 out.go:169] MINIKUBE_LOCATION=18756
	I0429 11:33:55.332610 1237148 notify.go:220] Checking for updates...
	I0429 11:33:55.337213 1237148 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:33:55.338846 1237148 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18756-1231546/kubeconfig
	I0429 11:33:55.340363 1237148 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18756-1231546/.minikube
	I0429 11:33:55.342112 1237148 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0429 11:33:55.345069 1237148 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 11:33:55.345387 1237148 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 11:33:55.365704 1237148 docker.go:122] docker version: linux-26.1.0:Docker Engine - Community
	I0429 11:33:55.365853 1237148 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 11:33:55.430915 1237148 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2024-04-29 11:33:55.421083741 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 11:33:55.431034 1237148 docker.go:295] overlay module found
	I0429 11:33:55.432808 1237148 out.go:97] Using the docker driver based on user configuration
	I0429 11:33:55.432834 1237148 start.go:297] selected driver: docker
	I0429 11:33:55.432848 1237148 start.go:901] validating driver "docker" against <nil>
	I0429 11:33:55.432987 1237148 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 11:33:55.485950 1237148 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2024-04-29 11:33:55.476961404 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 11:33:55.486114 1237148 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 11:33:55.486412 1237148 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0429 11:33:55.486574 1237148 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 11:33:55.488258 1237148 out.go:169] Using Docker driver with root privileges
	I0429 11:33:55.489882 1237148 cni.go:84] Creating CNI manager for ""
	I0429 11:33:55.489902 1237148 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 11:33:55.489912 1237148 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 11:33:55.489992 1237148 start.go:340] cluster config:
	{Name:download-only-665613 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-665613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:33:55.491817 1237148 out.go:97] Starting "download-only-665613" primary control-plane node in "download-only-665613" cluster
	I0429 11:33:55.491837 1237148 cache.go:121] Beginning downloading kic base image for docker with crio
	I0429 11:33:55.493239 1237148 out.go:97] Pulling base image v0.0.43-1713736339-18706 ...
	I0429 11:33:55.493266 1237148 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 11:33:55.493367 1237148 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 11:33:55.507137 1237148 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0429 11:33:55.507269 1237148 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory
	I0429 11:33:55.507295 1237148 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory, skipping pull
	I0429 11:33:55.507304 1237148 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in cache, skipping pull
	I0429 11:33:55.507313 1237148 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e as a tarball
	I0429 11:33:55.557600 1237148 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4
	I0429 11:33:55.557628 1237148 cache.go:56] Caching tarball of preloaded images
	I0429 11:33:55.558203 1237148 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 11:33:55.559918 1237148 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0429 11:33:55.559938 1237148 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4 ...
	I0429 11:33:55.655051 1237148 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:0b6b385f66a101b8e819a9a918236667 -> /home/jenkins/minikube-integration/18756-1231546/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-665613 host does not exist
	  To start a cluster, run: "minikube start -p download-only-665613"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.09s)

TestDownloadOnly/v1.30.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.21s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-665613
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-725376 --alsologtostderr --binary-mirror http://127.0.0.1:45633 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-725376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-725376
--- PASS: TestBinaryMirror (0.57s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-760922
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-760922: exit status 85 (84.213051ms)

-- stdout --
	* Profile "addons-760922" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-760922"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-760922
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-760922: exit status 85 (91.129197ms)

-- stdout --
	* Profile "addons-760922" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-760922"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (205.7s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-760922 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-760922 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m25.70051742s)
--- PASS: TestAddons/Setup (205.70s)

TestAddons/parallel/Registry (18.47s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 51.408208ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-tj9l7" [9bb8489a-b110-4d66-afb6-a31def145ada] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004170756s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-m8xkv" [33630f46-f313-4cc6-9d44-213e6df0c519] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005163575s
addons_test.go:340: (dbg) Run:  kubectl --context addons-760922 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-760922 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-760922 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.40759963s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-760922 ip
2024/04/29 11:37:48 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-760922 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.47s)

TestAddons/parallel/InspektorGadget (10.73s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-chpr5" [78b4d6be-7cc5-4a99-b80e-d7ee97f092bb] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003608797s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-760922
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-760922: (5.722345721s)
--- PASS: TestAddons/parallel/InspektorGadget (10.73s)

TestAddons/parallel/CSI (51.67s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 61.210343ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-760922 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-760922 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e9056764-8e5d-44d7-bfa1-aa33e3c51255] Pending
helpers_test.go:344: "task-pv-pod" [e9056764-8e5d-44d7-bfa1-aa33e3c51255] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e9056764-8e5d-44d7-bfa1-aa33e3c51255] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003992422s
addons_test.go:584: (dbg) Run:  kubectl --context addons-760922 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-760922 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-760922 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-760922 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-760922 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-760922 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-760922 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1ebb8517-e6fb-4a86-955a-1b3d27a46318] Pending
helpers_test.go:344: "task-pv-pod-restore" [1ebb8517-e6fb-4a86-955a-1b3d27a46318] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1ebb8517-e6fb-4a86-955a-1b3d27a46318] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003600426s
addons_test.go:626: (dbg) Run:  kubectl --context addons-760922 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-760922 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-760922 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-760922 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-760922 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.840714381s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-760922 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (51.67s)

TestAddons/parallel/Headlamp (13.06s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-760922 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-760922 --alsologtostderr -v=1: (1.05358888s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-8k4nc" [6cf9fc34-3aaf-4da5-9502-9dd3e6ed9510] Pending
helpers_test.go:344: "headlamp-7559bf459f-8k4nc" [6cf9fc34-3aaf-4da5-9502-9dd3e6ed9510] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-8k4nc" [6cf9fc34-3aaf-4da5-9502-9dd3e6ed9510] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004171163s
--- PASS: TestAddons/parallel/Headlamp (13.06s)

TestAddons/parallel/CloudSpanner (6.89s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-8677549d7-qldl5" [04f17a68-50d6-4e39-a129-724da2010f14] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003131805s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-760922
--- PASS: TestAddons/parallel/CloudSpanner (6.89s)

TestAddons/parallel/LocalPath (8.38s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-760922 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-760922 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-760922 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [48f663ec-a4c0-4b1c-a810-acfca97a1e98] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [48f663ec-a4c0-4b1c-a810-acfca97a1e98] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [48f663ec-a4c0-4b1c-a810-acfca97a1e98] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004445631s
addons_test.go:891: (dbg) Run:  kubectl --context addons-760922 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-760922 ssh "cat /opt/local-path-provisioner/pvc-bb91cfb2-1bc0-483a-82bf-c8a42280a852_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-760922 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-760922 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-760922 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.38s)

TestAddons/parallel/NvidiaDevicePlugin (5.75s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7lk7c" [68690e1d-7f8a-4423-aaed-674894ca372a] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004361204s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-760922
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.75s)

TestAddons/parallel/Yakd (6s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-jjvnb" [84426b9e-acf8-44f2-be2d-5ec8f641e460] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003963468s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/serial/GCPAuth/Namespaces (0.16s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-760922 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-760922 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

TestAddons/StoppedEnableDisable (12.23s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-760922
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-760922: (11.913523403s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-760922
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-760922
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-760922
--- PASS: TestAddons/StoppedEnableDisable (12.23s)

TestCertOptions (33.89s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-002271 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-002271 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (31.227876005s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-002271 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-002271 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-002271 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-002271" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-002271
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-002271: (2.021963517s)
--- PASS: TestCertOptions (33.89s)

TestCertExpiration (239.74s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-059027 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-059027 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (38.628517284s)
E0429 12:25:48.412802 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-059027 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-059027 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (18.589013823s)
helpers_test.go:175: Cleaning up "cert-expiration-059027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-059027
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-059027: (2.520877809s)
--- PASS: TestCertExpiration (239.74s)

TestForceSystemdFlag (38.29s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-937633 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-937633 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.656723898s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-937633 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-937633" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-937633
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-937633: (2.344538186s)
--- PASS: TestForceSystemdFlag (38.29s)

TestForceSystemdEnv (40.69s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-841567 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-841567 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.02847956s)
helpers_test.go:175: Cleaning up "force-systemd-env-841567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-841567
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-841567: (2.66442229s)
--- PASS: TestForceSystemdEnv (40.69s)

TestErrorSpam/setup (34.37s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-253726 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-253726 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-253726 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-253726 --driver=docker  --container-runtime=crio: (34.370001991s)
--- PASS: TestErrorSpam/setup (34.37s)

TestErrorSpam/start (0.71s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-253726 --log_dir /tmp/nospam-253726 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-253726 --log_dir /tmp/nospam-253726 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-253726 --log_dir /tmp/nospam-253726 start --dry-run
--- PASS: TestErrorSpam/start (0.71s)

TestErrorSpam/status (1.33s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-253726 --log_dir /tmp/nospam-253726 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-253726 --log_dir /tmp/nospam-253726 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-253726 --log_dir /tmp/nospam-253726 status
--- PASS: TestErrorSpam/status (1.33s)

TestErrorSpam/pause (1.78s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-253726 --log_dir /tmp/nospam-253726 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-253726 --log_dir /tmp/nospam-253726 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-253726 --log_dir /tmp/nospam-253726 pause
--- PASS: TestErrorSpam/pause (1.78s)

TestErrorSpam/unpause (1.81s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-253726 --log_dir /tmp/nospam-253726 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-253726 --log_dir /tmp/nospam-253726 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-253726 --log_dir /tmp/nospam-253726 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

TestErrorSpam/stop (1.48s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-253726 --log_dir /tmp/nospam-253726 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-253726 --log_dir /tmp/nospam-253726 stop: (1.273976485s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-253726 --log_dir /tmp/nospam-253726 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-253726 --log_dir /tmp/nospam-253726 stop
--- PASS: TestErrorSpam/stop (1.48s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18756-1231546/.minikube/files/etc/test/nested/copy/1236974/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (75.15s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-179378 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-179378 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m15.15031312s)
--- PASS: TestFunctional/serial/StartWithProxy (75.15s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (20.81s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-179378 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-179378 --alsologtostderr -v=8: (20.810469216s)
functional_test.go:659: soft start took 20.811015249s for "functional-179378" cluster.
--- PASS: TestFunctional/serial/SoftStart (20.81s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-179378 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-179378 cache add registry.k8s.io/pause:3.1: (1.230381334s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-179378 cache add registry.k8s.io/pause:3.3: (1.175349241s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-179378 cache add registry.k8s.io/pause:latest: (1.043197191s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-179378 /tmp/TestFunctionalserialCacheCmdcacheadd_local1334894571/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 cache add minikube-local-cache-test:functional-179378
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 cache delete minikube-local-cache-test:functional-179378
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-179378
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-179378 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (295.214399ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 kubectl -- --context functional-179378 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-179378 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (41.74s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-179378 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0429 11:47:30.300563 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
E0429 11:47:30.306777 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
E0429 11:47:30.317095 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
E0429 11:47:30.337387 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
E0429 11:47:30.377711 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
E0429 11:47:30.458005 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
E0429 11:47:30.618392 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
E0429 11:47:30.938928 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
E0429 11:47:31.579950 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
E0429 11:47:32.860743 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
E0429 11:47:35.421676 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-179378 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.742419312s)
functional_test.go:757: restart took 41.742550167s for "functional-179378" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.74s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-179378 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.68s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-179378 logs: (1.676241406s)
--- PASS: TestFunctional/serial/LogsCmd (1.68s)

TestFunctional/serial/LogsFileCmd (1.7s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 logs --file /tmp/TestFunctionalserialLogsFileCmd1521641922/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-179378 logs --file /tmp/TestFunctionalserialLogsFileCmd1521641922/001/logs.txt: (1.701755186s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.70s)

TestFunctional/serial/InvalidService (4.46s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-179378 apply -f testdata/invalidsvc.yaml
E0429 11:47:40.542869 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-179378
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-179378: exit status 115 (652.738926ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32527 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-179378 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.46s)

TestFunctional/parallel/ConfigCmd (0.54s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-179378 config get cpus: exit status 14 (87.786856ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-179378 config get cpus: exit status 14 (84.210672ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)

TestFunctional/parallel/DashboardCmd (10.1s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-179378 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-179378 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1263321: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.10s)

TestFunctional/parallel/DryRun (0.63s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-179378 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-179378 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (299.643462ms)

-- stdout --
	* [functional-179378] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18756
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18756-1231546/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18756-1231546/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0429 11:48:17.812194 1262774 out.go:291] Setting OutFile to fd 1 ...
	I0429 11:48:17.819298 1262774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:48:17.819322 1262774 out.go:304] Setting ErrFile to fd 2...
	I0429 11:48:17.819329 1262774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:48:17.819740 1262774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18756-1231546/.minikube/bin
	I0429 11:48:17.821267 1262774 out.go:298] Setting JSON to false
	I0429 11:48:17.822795 1262774 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27042,"bootTime":1714364256,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0429 11:48:17.823905 1262774 start.go:139] virtualization:  
	I0429 11:48:17.832137 1262774 out.go:177] * [functional-179378] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0429 11:48:17.834892 1262774 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 11:48:17.837599 1262774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:48:17.835139 1262774 notify.go:220] Checking for updates...
	I0429 11:48:17.843585 1262774 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18756-1231546/kubeconfig
	I0429 11:48:17.846048 1262774 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18756-1231546/.minikube
	I0429 11:48:17.848713 1262774 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0429 11:48:17.851416 1262774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 11:48:17.854504 1262774 config.go:182] Loaded profile config "functional-179378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 11:48:17.855009 1262774 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 11:48:17.891411 1262774 docker.go:122] docker version: linux-26.1.0:Docker Engine - Community
	I0429 11:48:17.891522 1262774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 11:48:17.996147 1262774 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-04-29 11:48:17.986967063 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 11:48:17.996281 1262774 docker.go:295] overlay module found
	I0429 11:48:17.999096 1262774 out.go:177] * Using the docker driver based on existing profile
	I0429 11:48:18.004474 1262774 start.go:297] selected driver: docker
	I0429 11:48:18.004513 1262774 start.go:901] validating driver "docker" against &{Name:functional-179378 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-179378 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:48:18.004639 1262774 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 11:48:18.008052 1262774 out.go:177] 
	W0429 11:48:18.010715 1262774 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0429 11:48:18.013516 1262774 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-179378 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.63s)
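The non-zero exit captured above is the expected outcome: the dry-run deliberately requests 250MB, which is below minikube's usable minimum of 1800MB, so the run aborts with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23). A minimal sketch of that floor check, assuming only the threshold and exit code stated in the log text (`validate_memory` is a hypothetical helper, not minikube's implementation):

```shell
# Hypothetical sketch of the memory floor check behind RSRC_INSUFFICIENT_REQ_MEMORY.
# The 1800 MB minimum and exit status 23 are taken from the log output above.
validate_memory() {
  req_mb="$1"
  min_mb=1800
  if [ "$req_mb" -lt "$min_mb" ]; then
    echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation ${req_mb}MiB is less than the usable minimum of ${min_mb}MB"
    return 23
  fi
  echo "memory OK: ${req_mb}MiB"
}

validate_memory 250 || echo "exit status: $?"
```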

TestFunctional/parallel/InternationalLanguage (0.26s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-179378 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-179378 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (260.202987ms)

-- stdout --
	* [functional-179378] minikube v1.33.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18756
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18756-1231546/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18756-1231546/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0429 11:48:17.528981 1262722 out.go:291] Setting OutFile to fd 1 ...
	I0429 11:48:17.529136 1262722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:48:17.529161 1262722 out.go:304] Setting ErrFile to fd 2...
	I0429 11:48:17.529177 1262722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:48:17.529532 1262722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18756-1231546/.minikube/bin
	I0429 11:48:17.529988 1262722 out.go:298] Setting JSON to false
	I0429 11:48:17.531340 1262722 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27042,"bootTime":1714364256,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0429 11:48:17.531414 1262722 start.go:139] virtualization:  
	I0429 11:48:17.534771 1262722 out.go:177] * [functional-179378] minikube v1.33.0 sur Ubuntu 20.04 (arm64)
	I0429 11:48:17.538665 1262722 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 11:48:17.541474 1262722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 11:48:17.538746 1262722 notify.go:220] Checking for updates...
	I0429 11:48:17.546339 1262722 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18756-1231546/kubeconfig
	I0429 11:48:17.548929 1262722 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18756-1231546/.minikube
	I0429 11:48:17.551416 1262722 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0429 11:48:17.553746 1262722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 11:48:17.556290 1262722 config.go:182] Loaded profile config "functional-179378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 11:48:17.556856 1262722 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 11:48:17.591105 1262722 docker.go:122] docker version: linux-26.1.0:Docker Engine - Community
	I0429 11:48:17.591223 1262722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 11:48:17.701845 1262722 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-04-29 11:48:17.690089763 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 11:48:17.701946 1262722 docker.go:295] overlay module found
	I0429 11:48:17.704737 1262722 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0429 11:48:17.707558 1262722 start.go:297] selected driver: docker
	I0429 11:48:17.707577 1262722 start.go:901] validating driver "docker" against &{Name:functional-179378 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-179378 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 11:48:17.707690 1262722 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 11:48:17.710611 1262722 out.go:177] 
	W0429 11:48:17.712734 1262722 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0429 11:48:17.714966 1262722 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)
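InternationalLanguage repeats the same under-provisioned dry-run and asserts that the RSRC_INSUFFICIENT_REQ_MEMORY message comes back localized (French here). A sketch of locale-keyed message selection, with the two strings copied from the logs above and truncated to the error-code prefix (`msg_for_locale` is a hypothetical helper, not minikube's translation machinery):

```shell
# Illustrative locale-to-message lookup; the French and English strings are the
# ones captured in this report, truncated to the error-code prefix.
msg_for_locale() {
  lang="${1%%_*}"   # "fr_FR.UTF-8" -> "fr"
  case "$lang" in
    fr) echo "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY" ;;
    *)  echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY" ;;
  esac
}

msg_for_locale "fr_FR.UTF-8"
msg_for_locale "en_US.UTF-8"
```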

TestFunctional/parallel/StatusCmd (1.05s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)

TestFunctional/parallel/ServiceCmdConnect (10.62s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-179378 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-179378 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-rvgd4" [c1c3c194-be5a-4272-8ca6-b725690f0055] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-rvgd4" [c1c3c194-be5a-4272-8ca6-b725690f0055] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003934758s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:32366
functional_test.go:1671: http://192.168.49.2:32366: success! body:

Hostname: hello-node-connect-6f49f58cd5-rvgd4

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32366
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.62s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (24.76s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8c9b43ec-6f8d-489d-99ed-7594e2eff759] Running
E0429 11:47:50.783111 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005630006s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-179378 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-179378 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-179378 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-179378 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [eac8ec0a-7168-4092-90e4-b8f78f3fa08d] Pending
helpers_test.go:344: "sp-pod" [eac8ec0a-7168-4092-90e4-b8f78f3fa08d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [eac8ec0a-7168-4092-90e4-b8f78f3fa08d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003476313s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-179378 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-179378 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-179378 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [121813a9-5d60-471a-bb71-8957b1eff40f] Pending
helpers_test.go:344: "sp-pod" [121813a9-5d60-471a-bb71-8957b1eff40f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003581762s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-179378 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.76s)
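The PVC test above checks a persistence invariant: a file touched through the claim by the first `sp-pod` must still be visible after that pod is deleted and recreated. The same invariant can be sketched locally, with a temp directory standing in for the PersistentVolume and subshells standing in for pods (paths and names are illustrative only):

```shell
# Local sketch of the write-then-recreate persistence check: data written via a
# mounted volume must outlive the pod. A temp directory plays the volume.
vol=$(mktemp -d)

( cd "$vol" && touch foo )            # first sp-pod: `touch /tmp/mount/foo`
# ...pod deleted and re-applied; the claim (directory) survives...
( cd "$vol" && ls ) | grep -qx foo && echo "file persisted across pod recreation"

rm -rf "$vol"
```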

TestFunctional/parallel/SSHCmd (0.68s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)

TestFunctional/parallel/CpCmd (2.4s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh -n functional-179378 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 cp functional-179378:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3200714576/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh -n functional-179378 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh -n functional-179378 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.40s)

TestFunctional/parallel/FileSync (0.62s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1236974/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh "sudo cat /etc/test/nested/copy/1236974/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.62s)

TestFunctional/parallel/CertSync (2.18s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1236974.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh "sudo cat /etc/ssl/certs/1236974.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1236974.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh "sudo cat /usr/share/ca-certificates/1236974.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/12369742.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh "sudo cat /etc/ssl/certs/12369742.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/12369742.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh "sudo cat /usr/share/ca-certificates/12369742.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.18s)

TestFunctional/parallel/NodeLabels (0.14s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-179378 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.14s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-179378 ssh "sudo systemctl is-active docker": exit status 1 (356.482579ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-179378 ssh "sudo systemctl is-active containerd": exit status 1 (375.714917ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
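Both "Non-zero exit" results above are the desired behavior: with crio as the active runtime, `systemctl is-active docker` and `systemctl is-active containerd` print `inactive` and exit with status 3, which SSH reports as "Process exited with status 3". A sketch of the interpretation the test relies on (`assert_runtime_disabled` is a hypothetical helper, not part of the test suite):

```shell
# `systemctl is-active` exits non-zero (3) with stdout "inactive" for a stopped
# unit, so a failing command here is the passing case for this test.
assert_runtime_disabled() {
  unit="$1"; stdout="$2"; status="$3"
  if [ "$status" -ne 0 ] && [ "$stdout" = "inactive" ]; then
    echo "$unit: disabled as expected"
  else
    echo "$unit: unexpected state (stdout=$stdout, exit=$status)"
    return 1
  fi
}

assert_runtime_disabled docker inactive 3
assert_runtime_disabled containerd inactive 3
```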

TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-179378 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-179378 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-179378 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1260620: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-179378 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-179378 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-179378 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [9aad5a06-12f3-48fb-8b69-6eb6e879800c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [9aad5a06-12f3-48fb-8b69-6eb6e879800c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004118615s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.49s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-179378 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.161.127 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-179378 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-179378 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-179378 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-q48qg" [66b2ce14-d50e-4528-a8b9-fd8d5c8e4f46] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-q48qg" [66b2ce14-d50e-4528-a8b9-fd8d5c8e4f46] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003406345s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
E0429 11:48:11.263568 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
functional_test.go:1311: Took "334.177297ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "65.94573ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "317.782514ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "60.876492ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctional/parallel/MountCmd/any-port (7.09s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-179378 /tmp/TestFunctionalparallelMountCmdany-port2417406048/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714391291861418059" to /tmp/TestFunctionalparallelMountCmdany-port2417406048/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714391291861418059" to /tmp/TestFunctionalparallelMountCmdany-port2417406048/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714391291861418059" to /tmp/TestFunctionalparallelMountCmdany-port2417406048/001/test-1714391291861418059
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-179378 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (319.474073ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 29 11:48 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 29 11:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 29 11:48 test-1714391291861418059
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh cat /mount-9p/test-1714391291861418059
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-179378 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [cbac6ea4-c430-4ad5-b912-4ad7c0587f66] Pending
helpers_test.go:344: "busybox-mount" [cbac6ea4-c430-4ad5-b912-4ad7c0587f66] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [cbac6ea4-c430-4ad5-b912-4ad7c0587f66] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [cbac6ea4-c430-4ad5-b912-4ad7c0587f66] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003465647s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-179378 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-179378 /tmp/TestFunctionalparallelMountCmdany-port2417406048/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.09s)
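The first `findmnt -T /mount-9p` probe above exits non-zero while the 9p mount is still coming up, and the harness simply re-runs it until it succeeds. A generic sketch of that poll-until-ready idiom (the helper name and timings here are illustrative, not minikube's own code):

```shell
# Re-run a command until it succeeds or the attempt budget runs out,
# the way the harness re-probes the mount after the first failure.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then return 0; fi
    i=$((i + 1))
    sleep 0.2
  done
  return 1
}

# Example: a probe that only succeeds on its second invocation,
# standing in for findmnt catching up with an in-flight mount.
marker=$(mktemp -u)
probe() { [ -e "$marker" ] || { : > "$marker"; return 1; }; }
retry 3 probe && echo "mount is up"
```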

TestFunctional/parallel/ServiceCmd/List (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 service list -o json
functional_test.go:1490: Took "516.725043ms" to run "out/minikube-linux-arm64 -p functional-179378 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:32721
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

TestFunctional/parallel/ServiceCmd/Format (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.48s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:32721
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

TestFunctional/parallel/MountCmd/specific-port (2.18s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-179378 /tmp/TestFunctionalparallelMountCmdspecific-port3017142707/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-179378 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (500.098425ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-179378 /tmp/TestFunctionalparallelMountCmdspecific-port3017142707/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-179378 ssh "sudo umount -f /mount-9p": exit status 1 (268.238115ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-179378 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-179378 /tmp/TestFunctionalparallelMountCmdspecific-port3017142707/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.18s)
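The cleanup above shows `umount -f` failing with "not mounted." (status 32 in the guest, surfaced as ssh exit 1) after the mount was already torn down; the harness logs the failure and carries on. The same tolerant-cleanup idiom in plain shell (function name and messages are illustrative):

```shell
# Force-unmount a path, treating "already gone" as acceptable rather
# than failing the cleanup step, like the test's post-teardown umount.
safe_umount() {
  if umount -f "$1" 2>/dev/null; then
    echo "unmounted: $1"
  else
    echo "already unmounted (or never mounted): $1"
  fi
}

safe_umount /definitely-not-a-mount-point
```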

TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-179378 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1557810619/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-179378 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1557810619/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-179378 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1557810619/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-179378 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-179378 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1557810619/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-179378 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1557810619/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-179378 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1557810619/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.33s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-179378 version -o=json --components: (1.325264785s)
--- PASS: TestFunctional/parallel/Version/components (1.33s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-179378 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-179378
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-179378 image ls --format short --alsologtostderr:
I0429 11:48:44.793212 1265235 out.go:291] Setting OutFile to fd 1 ...
I0429 11:48:44.793422 1265235 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 11:48:44.793434 1265235 out.go:304] Setting ErrFile to fd 2...
I0429 11:48:44.793440 1265235 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 11:48:44.793740 1265235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18756-1231546/.minikube/bin
I0429 11:48:44.794444 1265235 config.go:182] Loaded profile config "functional-179378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 11:48:44.794598 1265235 config.go:182] Loaded profile config "functional-179378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 11:48:44.795135 1265235 cli_runner.go:164] Run: docker container inspect functional-179378 --format={{.State.Status}}
I0429 11:48:44.820329 1265235 ssh_runner.go:195] Run: systemctl --version
I0429 11:48:44.820379 1265235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-179378
I0429 11:48:44.838734 1265235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34288 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/functional-179378/id_rsa Username:docker}
I0429 11:48:44.929039 1265235 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
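Per the stderr above, `image ls` gets its data from `sudo crictl images --output json` inside the node. A sketch of reducing that JSON to the repo:tag lines the short format prints, run against a trimmed hand-made sample (the two-image JSON is illustrative, not output captured from this run):

```shell
# Two-image sample in the shape crictl emits; real output also carries
# id, repoDigests, and size fields that the short format drops.
images='{"images":[{"repoTags":["registry.k8s.io/pause:3.9"]},{"repoTags":["docker.io/library/nginx:alpine"]}]}'

# Keep only quoted strings that look like repo:tag references.
printf '%s\n' "$images" | grep -o '"[a-z0-9./_-]*:[a-zA-Z0-9._-]*"' | tr -d '"'
```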

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-179378 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/kube-scheduler          | v1.30.0            | 547adae34140b | 61.6MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4740c1948d3fc | 60.9MB |
| gcr.io/google-containers/addon-resizer  | functional-179378  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/kube-proxy              | v1.30.0            | cb7eac0b42cc1 | 89.1MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/library/nginx                 | alpine             | e664fb1e82890 | 51.5MB |
| registry.k8s.io/kube-controller-manager | v1.30.0            | 68feac521c0f1 | 108MB  |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/library/nginx                 | latest             | 786a14303c960 | 197MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 014faa467e297 | 140MB  |
| registry.k8s.io/kube-apiserver          | v1.30.0            | 181f57fd3cdb7 | 114MB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-179378 image ls --format table --alsologtostderr:
I0429 11:48:45.504081 1265377 out.go:291] Setting OutFile to fd 1 ...
I0429 11:48:45.504307 1265377 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 11:48:45.504334 1265377 out.go:304] Setting ErrFile to fd 2...
I0429 11:48:45.504352 1265377 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 11:48:45.504701 1265377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18756-1231546/.minikube/bin
I0429 11:48:45.505858 1265377 config.go:182] Loaded profile config "functional-179378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 11:48:45.506292 1265377 config.go:182] Loaded profile config "functional-179378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 11:48:45.506916 1265377 cli_runner.go:164] Run: docker container inspect functional-179378 --format={{.State.Status}}
I0429 11:48:45.536053 1265377 ssh_runner.go:195] Run: systemctl --version
I0429 11:48:45.536113 1265377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-179378
I0429 11:48:45.556008 1265377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34288 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/functional-179378/id_rsa Username:docker}
I0429 11:48:45.642157 1265377 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-179378 image ls --format json --alsologtostderr:
[{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-179378"],"size":"34114467"},{"id":"181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb","repoDigests":["registry.k8s.io/kube-apiserver@sha256:603450584095e9beb21ab73002fcd49b6e10f6b0194f1e64cca2e3cffa13123e","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"113538528"},{"id":"547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0e04e710e758152f5f46761588d3e712c5b836839443b9c2c2d45ee511b803e9","registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"61568326"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f","repoDigests":["registry.k8s.io/kube-proxy@sha256:a744a3a6db8ed022077d83357b93766fc252bcf01c572b3c3687c80e1e5faa55","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"89133975"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"60940831"},{"id":"786a14303c96017fa81cc9756e01811a67bfabba40e5624f453ff2981e501db0","repoDigests":["docker.io/library/nginx@sha256:57cd68207d5a1ebf40d1b686feb8852e6507f4bdbdbe178c5924b9232653a532","docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee"],"repoTags":["docker.io/library/nginx:latest"],"size":"197029840"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe","registry.k8s.io/kube-controller-manager@sha256:63e991c4fc8bdc8fce68c183d152ba3ab560dc0a9b71ff97332a74a7605bbd3f"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"108229958"},{"id":"e664fb1e82890e5cf53c130a021c0333d897bad1f2406eac7edb29cd41df6b10","repoDigests":["docker.io/library/nginx@sha256:1f37baf7373d386ee9de0437325ae3e0202a3959803fd79144fa0bb27e2b2801","docker.io/library/nginx@sha256:fdbfdaea4fc323f44590e9afeb271da8c345a733bf44c4ad7861201676a95f42"],"repoTags":["docker.io/library/nginx:alpine"],"size":"51540272"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b","registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"140414767"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-179378 image ls --format json --alsologtostderr:
I0429 11:48:45.148121 1265297 out.go:291] Setting OutFile to fd 1 ...
I0429 11:48:45.148462 1265297 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 11:48:45.148499 1265297 out.go:304] Setting ErrFile to fd 2...
I0429 11:48:45.148520 1265297 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 11:48:45.148914 1265297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18756-1231546/.minikube/bin
I0429 11:48:45.149733 1265297 config.go:182] Loaded profile config "functional-179378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 11:48:45.152324 1265297 config.go:182] Loaded profile config "functional-179378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 11:48:45.153243 1265297 cli_runner.go:164] Run: docker container inspect functional-179378 --format={{.State.Status}}
I0429 11:48:45.199201 1265297 ssh_runner.go:195] Run: systemctl --version
I0429 11:48:45.199265 1265297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-179378
I0429 11:48:45.230854 1265297 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34288 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/functional-179378/id_rsa Username:docker}
I0429 11:48:45.334257 1265297 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.37s)
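The JSON listing above is an array of image objects, each with `id`, `repoDigests`, `repoTags`, and a `size` field that is a decimal string of bytes rather than a number. A minimal sketch of consuming that shape with only the Python standard library (the embedded sample is an abbreviated two-entry excerpt copied from the log):

```python
import json

# Trimmed sample in the same shape as the `image ls --format json` output
# above (two entries copied from the log; the real list is longer).
sample = """[
  {"id": "1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c",
   "repoDigests": ["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],
   "repoTags": ["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],
   "size": "3774172"},
  {"id": "8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5",
   "repoDigests": ["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],
   "repoTags": ["registry.k8s.io/pause:3.1"],
   "size": "528622"}
]"""

images = json.loads(sample)
# "size" is a decimal string, so convert before doing arithmetic on it.
total_bytes = sum(int(img["size"]) for img in images)
tags = [tag for img in images for tag in img["repoTags"]]
print(total_bytes)  # -> 4302794
print(tags)
```

Untagged images (as with the dashboard and metrics-scraper entries above) show up with an empty `repoTags` list, so the flattened `tags` list simply skips them.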

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-179378 image ls --format yaml --alsologtostderr:
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
- registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "140414767"
- id: cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f
repoDigests:
- registry.k8s.io/kube-proxy@sha256:a744a3a6db8ed022077d83357b93766fc252bcf01c572b3c3687c80e1e5faa55
- registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "89133975"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "60940831"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: e664fb1e82890e5cf53c130a021c0333d897bad1f2406eac7edb29cd41df6b10
repoDigests:
- docker.io/library/nginx@sha256:1f37baf7373d386ee9de0437325ae3e0202a3959803fd79144fa0bb27e2b2801
- docker.io/library/nginx@sha256:fdbfdaea4fc323f44590e9afeb271da8c345a733bf44c4ad7861201676a95f42
repoTags:
- docker.io/library/nginx:alpine
size: "51540272"
- id: 786a14303c96017fa81cc9756e01811a67bfabba40e5624f453ff2981e501db0
repoDigests:
- docker.io/library/nginx@sha256:57cd68207d5a1ebf40d1b686feb8852e6507f4bdbdbe178c5924b9232653a532
- docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee
repoTags:
- docker.io/library/nginx:latest
size: "197029840"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe
- registry.k8s.io/kube-controller-manager@sha256:63e991c4fc8bdc8fce68c183d152ba3ab560dc0a9b71ff97332a74a7605bbd3f
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "108229958"
- id: 547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0e04e710e758152f5f46761588d3e712c5b836839443b9c2c2d45ee511b803e9
- registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "61568326"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:603450584095e9beb21ab73002fcd49b6e10f6b0194f1e64cca2e3cffa13123e
- registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "113538528"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-179378
size: "34114467"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-179378 image ls --format yaml --alsologtostderr:
I0429 11:48:44.799410 1265234 out.go:291] Setting OutFile to fd 1 ...
I0429 11:48:44.799617 1265234 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 11:48:44.799639 1265234 out.go:304] Setting ErrFile to fd 2...
I0429 11:48:44.799657 1265234 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 11:48:44.799914 1265234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18756-1231546/.minikube/bin
I0429 11:48:44.800579 1265234 config.go:182] Loaded profile config "functional-179378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 11:48:44.800749 1265234 config.go:182] Loaded profile config "functional-179378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 11:48:44.801254 1265234 cli_runner.go:164] Run: docker container inspect functional-179378 --format={{.State.Status}}
I0429 11:48:44.817843 1265234 ssh_runner.go:195] Run: systemctl --version
I0429 11:48:44.817897 1265234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-179378
I0429 11:48:44.841358 1265234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34288 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/functional-179378/id_rsa Username:docker}
I0429 11:48:44.938167 1265234 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-179378 ssh pgrep buildkitd: exit status 1 (392.846939ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 image build -t localhost/my-image:functional-179378 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-179378 image build -t localhost/my-image:functional-179378 testdata/build --alsologtostderr: (2.174245683s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-179378 image build -t localhost/my-image:functional-179378 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b82932628c1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-179378
--> 1ee54b51934
Successfully tagged localhost/my-image:functional-179378
1ee54b51934046e9aa4d5da620b979db63f571e3df1d61a3f161e6ac6549b015
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-179378 image build -t localhost/my-image:functional-179378 testdata/build --alsologtostderr:
I0429 11:48:45.464970 1265372 out.go:291] Setting OutFile to fd 1 ...
I0429 11:48:45.465688 1265372 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 11:48:45.465701 1265372 out.go:304] Setting ErrFile to fd 2...
I0429 11:48:45.465707 1265372 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 11:48:45.466053 1265372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18756-1231546/.minikube/bin
I0429 11:48:45.466943 1265372 config.go:182] Loaded profile config "functional-179378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 11:48:45.467619 1265372 config.go:182] Loaded profile config "functional-179378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 11:48:45.468151 1265372 cli_runner.go:164] Run: docker container inspect functional-179378 --format={{.State.Status}}
I0429 11:48:45.490964 1265372 ssh_runner.go:195] Run: systemctl --version
I0429 11:48:45.491017 1265372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-179378
I0429 11:48:45.511347 1265372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34288 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/functional-179378/id_rsa Username:docker}
I0429 11:48:45.600837 1265372 build_images.go:161] Building image from path: /tmp/build.1732549614.tar
I0429 11:48:45.600910 1265372 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0429 11:48:45.609533 1265372 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1732549614.tar
I0429 11:48:45.612837 1265372 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1732549614.tar: stat -c "%s %y" /var/lib/minikube/build/build.1732549614.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1732549614.tar': No such file or directory
I0429 11:48:45.612860 1265372 ssh_runner.go:362] scp /tmp/build.1732549614.tar --> /var/lib/minikube/build/build.1732549614.tar (3072 bytes)
I0429 11:48:45.638434 1265372 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1732549614
I0429 11:48:45.648045 1265372 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1732549614 -xf /var/lib/minikube/build/build.1732549614.tar
I0429 11:48:45.657780 1265372 crio.go:315] Building image: /var/lib/minikube/build/build.1732549614
I0429 11:48:45.657844 1265372 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-179378 /var/lib/minikube/build/build.1732549614 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0429 11:48:47.523490 1265372 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-179378 /var/lib/minikube/build/build.1732549614 --cgroup-manager=cgroupfs: (1.865617287s)
I0429 11:48:47.523567 1265372 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1732549614
I0429 11:48:47.532248 1265372 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1732549614.tar
I0429 11:48:47.540761 1265372 build_images.go:217] Built localhost/my-image:functional-179378 from /tmp/build.1732549614.tar
I0429 11:48:47.540844 1265372 build_images.go:133] succeeded building to: functional-179378
I0429 11:48:47.540857 1265372 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.81s)
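The three `STEP n/3` lines echoed in the build stdout above correspond to a Containerfile of roughly this shape (reconstructed from the log; `content.txt` is whatever file ships alongside it in `testdata/build`):

```dockerfile
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```

With the crio runtime, minikube tars this build context, copies it to `/var/lib/minikube/build/` on the node, and runs `sudo podman build` against the extracted directory, as the `ssh_runner` lines above show.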

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.464776314s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-179378
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 image load --daemon gcr.io/google-containers/addon-resizer:functional-179378 --alsologtostderr
2024/04/29 11:48:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-179378 image load --daemon gcr.io/google-containers/addon-resizer:functional-179378 --alsologtostderr: (5.976963457s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.25s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 image load --daemon gcr.io/google-containers/addon-resizer:functional-179378 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-179378 image load --daemon gcr.io/google-containers/addon-resizer:functional-179378 --alsologtostderr: (2.916504809s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.590575421s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-179378
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 image load --daemon gcr.io/google-containers/addon-resizer:functional-179378 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-179378 image load --daemon gcr.io/google-containers/addon-resizer:functional-179378 --alsologtostderr: (3.615616533s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 image save gcr.io/google-containers/addon-resizer:functional-179378 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 image rm gcr.io/google-containers/addon-resizer:functional-179378 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-179378 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.054014628s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-179378
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-179378 image save --daemon gcr.io/google-containers/addon-resizer:functional-179378 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-179378
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.93s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-179378
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-179378
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-179378
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (162.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-064493 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0429 11:48:52.223787 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
E0429 11:50:14.144293 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-064493 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m41.516586757s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (162.35s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (9.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-064493 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-064493 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-064493 -- rollout status deployment/busybox: (5.981394389s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-064493 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-064493 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-064493 -- exec busybox-fc5497c4f-5grtv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-064493 -- exec busybox-fc5497c4f-kgwxk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-064493 -- exec busybox-fc5497c4f-md9j9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-064493 -- exec busybox-fc5497c4f-5grtv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-064493 -- exec busybox-fc5497c4f-kgwxk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-064493 -- exec busybox-fc5497c4f-md9j9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-064493 -- exec busybox-fc5497c4f-5grtv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-064493 -- exec busybox-fc5497c4f-kgwxk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-064493 -- exec busybox-fc5497c4f-md9j9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.10s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-064493 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-064493 -- exec busybox-fc5497c4f-5grtv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-064493 -- exec busybox-fc5497c4f-5grtv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-064493 -- exec busybox-fc5497c4f-kgwxk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-064493 -- exec busybox-fc5497c4f-kgwxk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-064493 -- exec busybox-fc5497c4f-md9j9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-064493 -- exec busybox-fc5497c4f-md9j9 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.71s)
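The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above pulls the host IP out of busybox's nslookup output: take the fifth line, then the third single-space-delimited field. A sketch of the same extraction in Python, using a hypothetical sample of busybox nslookup output shaped to match what that pipeline expects (the addresses are illustrative, with `192.168.49.1` taken from the ping commands in the log):

```python
# Hypothetical busybox `nslookup host.minikube.internal` output; the layout
# (answer IP on line 5, field 3) is what the awk/cut pipeline assumes.
sample = """Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal
"""

def host_ip(nslookup_output: str) -> str:
    # awk 'NR==5' -> the fifth line; cut -d' ' -f3 -> the third field.
    # cut splits on every single space without collapsing runs, so split
    # on " " rather than on arbitrary whitespace.
    line5 = nslookup_output.splitlines()[4]
    return line5.split(" ")[2]

print(host_ip(sample))  # -> 192.168.49.1
```

This line-position trick is brittle by design (it only has to work against busybox's fixed output format inside the test pod), which is why the test hardcodes `NR==5` instead of matching on the name.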

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (27.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-064493 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-064493 -v=7 --alsologtostderr: (26.653604384s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-064493 status -v=7 --alsologtostderr: (1.021571424s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.68s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-064493 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.74s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.74s)

TestMultiControlPlane/serial/CopyFile (18.81s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 cp testdata/cp-test.txt ha-064493:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 cp ha-064493:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2459443306/001/cp-test_ha-064493.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 cp ha-064493:/home/docker/cp-test.txt ha-064493-m02:/home/docker/cp-test_ha-064493_ha-064493-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m02 "sudo cat /home/docker/cp-test_ha-064493_ha-064493-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 cp ha-064493:/home/docker/cp-test.txt ha-064493-m03:/home/docker/cp-test_ha-064493_ha-064493-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m03 "sudo cat /home/docker/cp-test_ha-064493_ha-064493-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 cp ha-064493:/home/docker/cp-test.txt ha-064493-m04:/home/docker/cp-test_ha-064493_ha-064493-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m04 "sudo cat /home/docker/cp-test_ha-064493_ha-064493-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 cp testdata/cp-test.txt ha-064493-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 cp ha-064493-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2459443306/001/cp-test_ha-064493-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 cp ha-064493-m02:/home/docker/cp-test.txt ha-064493:/home/docker/cp-test_ha-064493-m02_ha-064493.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493 "sudo cat /home/docker/cp-test_ha-064493-m02_ha-064493.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 cp ha-064493-m02:/home/docker/cp-test.txt ha-064493-m03:/home/docker/cp-test_ha-064493-m02_ha-064493-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m03 "sudo cat /home/docker/cp-test_ha-064493-m02_ha-064493-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 cp ha-064493-m02:/home/docker/cp-test.txt ha-064493-m04:/home/docker/cp-test_ha-064493-m02_ha-064493-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m04 "sudo cat /home/docker/cp-test_ha-064493-m02_ha-064493-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 cp testdata/cp-test.txt ha-064493-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 cp ha-064493-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2459443306/001/cp-test_ha-064493-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 cp ha-064493-m03:/home/docker/cp-test.txt ha-064493:/home/docker/cp-test_ha-064493-m03_ha-064493.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493 "sudo cat /home/docker/cp-test_ha-064493-m03_ha-064493.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 cp ha-064493-m03:/home/docker/cp-test.txt ha-064493-m02:/home/docker/cp-test_ha-064493-m03_ha-064493-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m02 "sudo cat /home/docker/cp-test_ha-064493-m03_ha-064493-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 cp ha-064493-m03:/home/docker/cp-test.txt ha-064493-m04:/home/docker/cp-test_ha-064493-m03_ha-064493-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m04 "sudo cat /home/docker/cp-test_ha-064493-m03_ha-064493-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 cp testdata/cp-test.txt ha-064493-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 cp ha-064493-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2459443306/001/cp-test_ha-064493-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 cp ha-064493-m04:/home/docker/cp-test.txt ha-064493:/home/docker/cp-test_ha-064493-m04_ha-064493.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493 "sudo cat /home/docker/cp-test_ha-064493-m04_ha-064493.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 cp ha-064493-m04:/home/docker/cp-test.txt ha-064493-m02:/home/docker/cp-test_ha-064493-m04_ha-064493-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m02 "sudo cat /home/docker/cp-test_ha-064493-m04_ha-064493-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 cp ha-064493-m04:/home/docker/cp-test.txt ha-064493-m03:/home/docker/cp-test_ha-064493-m04_ha-064493-m03.txt
E0429 11:52:30.300725 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 ssh -n ha-064493-m03 "sudo cat /home/docker/cp-test_ha-064493-m04_ha-064493-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.81s)

TestMultiControlPlane/serial/StopSecondaryNode (12.69s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-064493 node stop m02 -v=7 --alsologtostderr: (11.956394006s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-064493 status -v=7 --alsologtostderr: exit status 7 (737.847578ms)

-- stdout --
	ha-064493
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064493-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-064493-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-064493-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0429 11:52:43.317686 1280303 out.go:291] Setting OutFile to fd 1 ...
	I0429 11:52:43.317872 1280303 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:52:43.317881 1280303 out.go:304] Setting ErrFile to fd 2...
	I0429 11:52:43.317886 1280303 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:52:43.318148 1280303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18756-1231546/.minikube/bin
	I0429 11:52:43.318349 1280303 out.go:298] Setting JSON to false
	I0429 11:52:43.318387 1280303 mustload.go:65] Loading cluster: ha-064493
	I0429 11:52:43.318476 1280303 notify.go:220] Checking for updates...
	I0429 11:52:43.319521 1280303 config.go:182] Loaded profile config "ha-064493": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 11:52:43.319546 1280303 status.go:255] checking status of ha-064493 ...
	I0429 11:52:43.320129 1280303 cli_runner.go:164] Run: docker container inspect ha-064493 --format={{.State.Status}}
	I0429 11:52:43.338134 1280303 status.go:330] ha-064493 host status = "Running" (err=<nil>)
	I0429 11:52:43.338160 1280303 host.go:66] Checking if "ha-064493" exists ...
	I0429 11:52:43.338506 1280303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-064493
	I0429 11:52:43.356375 1280303 host.go:66] Checking if "ha-064493" exists ...
	I0429 11:52:43.356908 1280303 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 11:52:43.356977 1280303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-064493
	I0429 11:52:43.386575 1280303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34293 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/ha-064493/id_rsa Username:docker}
	I0429 11:52:43.479040 1280303 ssh_runner.go:195] Run: systemctl --version
	I0429 11:52:43.484494 1280303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 11:52:43.496008 1280303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 11:52:43.564998 1280303 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:72 SystemTime:2024-04-29 11:52:43.553390023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 11:52:43.565904 1280303 kubeconfig.go:125] found "ha-064493" server: "https://192.168.49.254:8443"
	I0429 11:52:43.565944 1280303 api_server.go:166] Checking apiserver status ...
	I0429 11:52:43.566001 1280303 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 11:52:43.578535 1280303 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1371/cgroup
	I0429 11:52:43.588321 1280303 api_server.go:182] apiserver freezer: "12:freezer:/docker/27a5a14fd9a1d9365ee1d6e5c2c1bd78bba3463cbb96e8937cc9942a2ca44fcf/crio/crio-ae3059157c6650a93444c32a2e9801e0be1071a7fac5c25f993a6de36bd60db2"
	I0429 11:52:43.588389 1280303 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/27a5a14fd9a1d9365ee1d6e5c2c1bd78bba3463cbb96e8937cc9942a2ca44fcf/crio/crio-ae3059157c6650a93444c32a2e9801e0be1071a7fac5c25f993a6de36bd60db2/freezer.state
	I0429 11:52:43.597723 1280303 api_server.go:204] freezer state: "THAWED"
	I0429 11:52:43.597750 1280303 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0429 11:52:43.606539 1280303 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0429 11:52:43.606568 1280303 status.go:422] ha-064493 apiserver status = Running (err=<nil>)
	I0429 11:52:43.606580 1280303 status.go:257] ha-064493 status: &{Name:ha-064493 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 11:52:43.606597 1280303 status.go:255] checking status of ha-064493-m02 ...
	I0429 11:52:43.606897 1280303 cli_runner.go:164] Run: docker container inspect ha-064493-m02 --format={{.State.Status}}
	I0429 11:52:43.629011 1280303 status.go:330] ha-064493-m02 host status = "Stopped" (err=<nil>)
	I0429 11:52:43.629063 1280303 status.go:343] host is not running, skipping remaining checks
	I0429 11:52:43.629072 1280303 status.go:257] ha-064493-m02 status: &{Name:ha-064493-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 11:52:43.629100 1280303 status.go:255] checking status of ha-064493-m03 ...
	I0429 11:52:43.629419 1280303 cli_runner.go:164] Run: docker container inspect ha-064493-m03 --format={{.State.Status}}
	I0429 11:52:43.643949 1280303 status.go:330] ha-064493-m03 host status = "Running" (err=<nil>)
	I0429 11:52:43.643974 1280303 host.go:66] Checking if "ha-064493-m03" exists ...
	I0429 11:52:43.644312 1280303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-064493-m03
	I0429 11:52:43.661259 1280303 host.go:66] Checking if "ha-064493-m03" exists ...
	I0429 11:52:43.661568 1280303 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 11:52:43.661615 1280303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-064493-m03
	I0429 11:52:43.677925 1280303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34303 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/ha-064493-m03/id_rsa Username:docker}
	I0429 11:52:43.766339 1280303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 11:52:43.779460 1280303 kubeconfig.go:125] found "ha-064493" server: "https://192.168.49.254:8443"
	I0429 11:52:43.779491 1280303 api_server.go:166] Checking apiserver status ...
	I0429 11:52:43.779539 1280303 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 11:52:43.790539 1280303 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1314/cgroup
	I0429 11:52:43.800186 1280303 api_server.go:182] apiserver freezer: "12:freezer:/docker/3e752b6bdb1d1c1a88ee31e2219214d974377118ceba5342d898d91b03760469/crio/crio-84de301ee29d0d0d7086359c5a3315883f6aa1707b56398dcd2d20b616fb4276"
	I0429 11:52:43.800260 1280303 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3e752b6bdb1d1c1a88ee31e2219214d974377118ceba5342d898d91b03760469/crio/crio-84de301ee29d0d0d7086359c5a3315883f6aa1707b56398dcd2d20b616fb4276/freezer.state
	I0429 11:52:43.809060 1280303 api_server.go:204] freezer state: "THAWED"
	I0429 11:52:43.809090 1280303 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0429 11:52:43.816964 1280303 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0429 11:52:43.816993 1280303 status.go:422] ha-064493-m03 apiserver status = Running (err=<nil>)
	I0429 11:52:43.817004 1280303 status.go:257] ha-064493-m03 status: &{Name:ha-064493-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 11:52:43.817045 1280303 status.go:255] checking status of ha-064493-m04 ...
	I0429 11:52:43.817362 1280303 cli_runner.go:164] Run: docker container inspect ha-064493-m04 --format={{.State.Status}}
	I0429 11:52:43.834111 1280303 status.go:330] ha-064493-m04 host status = "Running" (err=<nil>)
	I0429 11:52:43.834134 1280303 host.go:66] Checking if "ha-064493-m04" exists ...
	I0429 11:52:43.834439 1280303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-064493-m04
	I0429 11:52:43.855680 1280303 host.go:66] Checking if "ha-064493-m04" exists ...
	I0429 11:52:43.855970 1280303 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 11:52:43.856021 1280303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-064493-m04
	I0429 11:52:43.874241 1280303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34308 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/ha-064493-m04/id_rsa Username:docker}
	I0429 11:52:43.965997 1280303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 11:52:43.978761 1280303 status.go:257] ha-064493-m04 status: &{Name:ha-064493-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.69s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

TestMultiControlPlane/serial/RestartSecondaryNode (23.8s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 node start m02 -v=7 --alsologtostderr
E0429 11:52:45.371164 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
E0429 11:52:45.376516 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
E0429 11:52:45.387045 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
E0429 11:52:45.407278 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
E0429 11:52:45.447547 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
E0429 11:52:45.527785 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
E0429 11:52:45.688017 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
E0429 11:52:46.008442 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
E0429 11:52:46.648616 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
E0429 11:52:47.928803 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
E0429 11:52:50.489405 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
E0429 11:52:55.609843 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
E0429 11:52:57.985032 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
E0429 11:53:05.850044 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-064493 node start m02 -v=7 --alsologtostderr: (22.36072894s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-064493 status -v=7 --alsologtostderr: (1.318864333s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (23.80s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (5.8s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (5.793332614s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (5.80s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (177.63s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-064493 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-064493 -v=7 --alsologtostderr
E0429 11:53:26.330456 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-064493 -v=7 --alsologtostderr: (36.834688752s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-064493 --wait=true -v=7 --alsologtostderr
E0429 11:54:07.291174 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
E0429 11:55:29.211834 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-064493 --wait=true -v=7 --alsologtostderr: (2m20.626551179s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-064493
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (177.63s)

TestMultiControlPlane/serial/DeleteSecondaryNode (13.06s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-064493 node delete m03 -v=7 --alsologtostderr: (12.14469973s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (13.06s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

TestMultiControlPlane/serial/StopCluster (35.82s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-064493 stop -v=7 --alsologtostderr: (35.696270042s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-064493 status -v=7 --alsologtostderr: exit status 7 (125.889975ms)

-- stdout --
	ha-064493
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-064493-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-064493-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0429 11:57:01.111195 1294173 out.go:291] Setting OutFile to fd 1 ...
	I0429 11:57:01.111500 1294173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:57:01.111533 1294173 out.go:304] Setting ErrFile to fd 2...
	I0429 11:57:01.111556 1294173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 11:57:01.111862 1294173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18756-1231546/.minikube/bin
	I0429 11:57:01.112096 1294173 out.go:298] Setting JSON to false
	I0429 11:57:01.112159 1294173 mustload.go:65] Loading cluster: ha-064493
	I0429 11:57:01.112236 1294173 notify.go:220] Checking for updates...
	I0429 11:57:01.113246 1294173 config.go:182] Loaded profile config "ha-064493": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 11:57:01.113274 1294173 status.go:255] checking status of ha-064493 ...
	I0429 11:57:01.113847 1294173 cli_runner.go:164] Run: docker container inspect ha-064493 --format={{.State.Status}}
	I0429 11:57:01.132957 1294173 status.go:330] ha-064493 host status = "Stopped" (err=<nil>)
	I0429 11:57:01.132978 1294173 status.go:343] host is not running, skipping remaining checks
	I0429 11:57:01.132986 1294173 status.go:257] ha-064493 status: &{Name:ha-064493 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 11:57:01.133033 1294173 status.go:255] checking status of ha-064493-m02 ...
	I0429 11:57:01.133337 1294173 cli_runner.go:164] Run: docker container inspect ha-064493-m02 --format={{.State.Status}}
	I0429 11:57:01.150402 1294173 status.go:330] ha-064493-m02 host status = "Stopped" (err=<nil>)
	I0429 11:57:01.150426 1294173 status.go:343] host is not running, skipping remaining checks
	I0429 11:57:01.150435 1294173 status.go:257] ha-064493-m02 status: &{Name:ha-064493-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 11:57:01.150514 1294173 status.go:255] checking status of ha-064493-m04 ...
	I0429 11:57:01.150832 1294173 cli_runner.go:164] Run: docker container inspect ha-064493-m04 --format={{.State.Status}}
	I0429 11:57:01.173875 1294173 status.go:330] ha-064493-m04 host status = "Stopped" (err=<nil>)
	I0429 11:57:01.173900 1294173 status.go:343] host is not running, skipping remaining checks
	I0429 11:57:01.173909 1294173 status.go:257] ha-064493-m04 status: &{Name:ha-064493-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.82s)

TestMultiControlPlane/serial/RestartCluster (132.65s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-064493 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0429 11:57:30.301067 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
E0429 11:57:45.370531 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
E0429 11:58:13.052063 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-064493 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m11.747648222s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (132.65s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

TestMultiControlPlane/serial/AddSecondaryNode (58.36s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-064493 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-064493 --control-plane -v=7 --alsologtostderr: (57.385937229s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-064493 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (58.36s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.76s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.76s)

TestJSONOutput/start/Command (75.14s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-675674 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-675674 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m15.141244176s)
--- PASS: TestJSONOutput/start/Command (75.14s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.77s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-675674 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.69s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-675674 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-675674 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-675674 --output=json --user=testUser: (5.851732027s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-993248 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-993248 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (86.004102ms)

-- stdout --
	{"specversion":"1.0","id":"ba293852-130b-4b6f-958f-606c29fba7e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-993248] minikube v1.33.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"21053017-240c-4a2f-95f5-bd5cd1f0d887","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18756"}}
	{"specversion":"1.0","id":"61d678e3-1a77-45c7-8e22-078a0a225528","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9361c823-edf4-49a0-aea2-c81cd1a439e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18756-1231546/kubeconfig"}}
	{"specversion":"1.0","id":"25921ea5-5c3f-4a36-9916-bf0408b44750","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18756-1231546/.minikube"}}
	{"specversion":"1.0","id":"054ae0a9-a07b-4ea1-97b3-7c64095d9a77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"aca37f28-5939-4ee0-9105-38aa4616dbb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2632d43f-4aa7-4476-80d8-bd1f2cc4e4a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-993248" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-993248
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (43.03s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-838671 --network=
E0429 12:02:30.300896 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-838671 --network=: (40.886102149s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-838671" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-838671
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-838671: (2.120134494s)
--- PASS: TestKicCustomNetwork/create_custom_network (43.03s)

TestKicCustomNetwork/use_default_bridge_network (36.99s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-624995 --network=bridge
E0429 12:02:45.370303 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-624995 --network=bridge: (34.892907186s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-624995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-624995
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-624995: (2.068701571s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.99s)

TestKicExistingNetwork (32.24s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-996622 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-996622 --network=existing-network: (30.073729109s)
helpers_test.go:175: Cleaning up "existing-network-996622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-996622
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-996622: (2.029611846s)
--- PASS: TestKicExistingNetwork (32.24s)

TestKicCustomSubnet (33.05s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-996168 --subnet=192.168.60.0/24
E0429 12:03:53.345275 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-996168 --subnet=192.168.60.0/24: (30.961123395s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-996168 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-996168" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-996168
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-996168: (2.063574157s)
--- PASS: TestKicCustomSubnet (33.05s)

TestKicStaticIP (36.92s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-058118 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-058118 --static-ip=192.168.200.200: (34.636783544s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-058118 ip
helpers_test.go:175: Cleaning up "static-ip-058118" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-058118
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-058118: (2.123143568s)
--- PASS: TestKicStaticIP (36.92s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (71.6s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-106024 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-106024 --driver=docker  --container-runtime=crio: (31.910974312s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-108970 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-108970 --driver=docker  --container-runtime=crio: (34.560619514s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-106024
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-108970
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-108970" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-108970
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-108970: (1.945007625s)
helpers_test.go:175: Cleaning up "first-106024" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-106024
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-106024: (1.94391461s)
--- PASS: TestMinikubeProfile (71.60s)

TestMountStart/serial/StartWithMountFirst (9.25s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-837201 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-837201 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.248061628s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.25s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-837201 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (7.03s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-850781 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-850781 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.027813628s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.03s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-850781 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-837201 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-837201 --alsologtostderr -v=5: (1.61313531s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-850781 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-850781
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-850781: (1.199277674s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (8.02s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-850781
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-850781: (7.023068111s)
--- PASS: TestMountStart/serial/RestartStopped (8.02s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-850781 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (92.21s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-546948 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0429 12:07:30.300656 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
E0429 12:07:45.370447 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-546948 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m31.679758556s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (92.21s)

TestMultiNode/serial/DeployApp2Nodes (5.05s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-546948 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-546948 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-546948 -- rollout status deployment/busybox: (2.986053381s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-546948 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-546948 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-546948 -- exec busybox-fc5497c4f-87ns9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-546948 -- exec busybox-fc5497c4f-zxpld -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-546948 -- exec busybox-fc5497c4f-87ns9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-546948 -- exec busybox-fc5497c4f-zxpld -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-546948 -- exec busybox-fc5497c4f-87ns9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-546948 -- exec busybox-fc5497c4f-zxpld -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.05s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-546948 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-546948 -- exec busybox-fc5497c4f-87ns9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-546948 -- exec busybox-fc5497c4f-87ns9 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-546948 -- exec busybox-fc5497c4f-zxpld -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-546948 -- exec busybox-fc5497c4f-zxpld -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.08s)
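The host-IP lookup above depends on a fixed line/field position in busybox `nslookup` output (`awk 'NR==5' | cut -d' ' -f3`). A minimal local sketch of that pipeline, with a printf-generated sample standing in for a live resolver (the sample text is a hypothetical busybox-style response, not captured from this run):

```shell
# Hypothetical stand-in for `nslookup host.minikube.internal` run inside a
# busybox pod; real resolver output varies by busybox version.
printf 'Server:\t10.96.0.10\nAddress:\t10.96.0.10:53\n\nName:\thost.minikube.internal\nAddress 1: 192.168.67.1 host.minikube.internal\n' \
  | awk 'NR==5' \
  | cut -d' ' -f3   # keep line 5, take its 3rd space-delimited field
# → 192.168.67.1  (the host IP the test then pings with `ping -c 1`)
```

Because the pipeline is purely positional, a busybox image whose `nslookup` prints a different number of header lines would break the extraction.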

                                                
                                    
TestMultiNode/serial/AddNode (46.77s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-546948 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-546948 -v 3 --alsologtostderr: (46.09309904s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.77s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-546948 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.34s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.34s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.33s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 cp testdata/cp-test.txt multinode-546948:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 ssh -n multinode-546948 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 cp multinode-546948:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2152033925/001/cp-test_multinode-546948.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 ssh -n multinode-546948 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 cp multinode-546948:/home/docker/cp-test.txt multinode-546948-m02:/home/docker/cp-test_multinode-546948_multinode-546948-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 ssh -n multinode-546948 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 ssh -n multinode-546948-m02 "sudo cat /home/docker/cp-test_multinode-546948_multinode-546948-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 cp multinode-546948:/home/docker/cp-test.txt multinode-546948-m03:/home/docker/cp-test_multinode-546948_multinode-546948-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 ssh -n multinode-546948 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 ssh -n multinode-546948-m03 "sudo cat /home/docker/cp-test_multinode-546948_multinode-546948-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 cp testdata/cp-test.txt multinode-546948-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 ssh -n multinode-546948-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 cp multinode-546948-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2152033925/001/cp-test_multinode-546948-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 ssh -n multinode-546948-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 cp multinode-546948-m02:/home/docker/cp-test.txt multinode-546948:/home/docker/cp-test_multinode-546948-m02_multinode-546948.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 ssh -n multinode-546948-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 ssh -n multinode-546948 "sudo cat /home/docker/cp-test_multinode-546948-m02_multinode-546948.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 cp multinode-546948-m02:/home/docker/cp-test.txt multinode-546948-m03:/home/docker/cp-test_multinode-546948-m02_multinode-546948-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 ssh -n multinode-546948-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 ssh -n multinode-546948-m03 "sudo cat /home/docker/cp-test_multinode-546948-m02_multinode-546948-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 cp testdata/cp-test.txt multinode-546948-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 ssh -n multinode-546948-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 cp multinode-546948-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2152033925/001/cp-test_multinode-546948-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 ssh -n multinode-546948-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 cp multinode-546948-m03:/home/docker/cp-test.txt multinode-546948:/home/docker/cp-test_multinode-546948-m03_multinode-546948.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 ssh -n multinode-546948-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 ssh -n multinode-546948 "sudo cat /home/docker/cp-test_multinode-546948-m03_multinode-546948.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 cp multinode-546948-m03:/home/docker/cp-test.txt multinode-546948-m02:/home/docker/cp-test_multinode-546948-m03_multinode-546948-m02.txt
E0429 12:09:08.412397 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 ssh -n multinode-546948-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 ssh -n multinode-546948-m02 "sudo cat /home/docker/cp-test_multinode-546948-m03_multinode-546948-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.33s)
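Every CopyFile step above follows the same verify pattern: copy a file, then read the destination back with `cat` and compare it to the source. A plain-filesystem sketch of that round trip (local `cp` stands in for `minikube cp`, and the temp paths are hypothetical):

```shell
# Round-trip check: copy, read back, compare byte-for-byte.
src="$(mktemp)"; dst="$(mktemp)"
echo 'cp-test payload' > "$src"
cp "$src" "$dst"                              # stand-in for `minikube cp src node:/path`
[ "$(cat "$dst")" = "$(cat "$src")" ] && echo contents-match
# → contents-match
```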

                                                
                                    
TestMultiNode/serial/StopNode (2.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-546948 node stop m03: (1.211489304s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-546948 status: exit status 7 (514.719617ms)

                                                
                                                
-- stdout --
	multinode-546948
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-546948-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-546948-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-546948 status --alsologtostderr: exit status 7 (522.147776ms)

                                                
                                                
-- stdout --
	multinode-546948
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-546948-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-546948-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:09:11.073724 1344640 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:09:11.073907 1344640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:09:11.073920 1344640 out.go:304] Setting ErrFile to fd 2...
	I0429 12:09:11.073925 1344640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:09:11.074188 1344640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18756-1231546/.minikube/bin
	I0429 12:09:11.074414 1344640 out.go:298] Setting JSON to false
	I0429 12:09:11.074445 1344640 mustload.go:65] Loading cluster: multinode-546948
	I0429 12:09:11.074500 1344640 notify.go:220] Checking for updates...
	I0429 12:09:11.074902 1344640 config.go:182] Loaded profile config "multinode-546948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:09:11.074915 1344640 status.go:255] checking status of multinode-546948 ...
	I0429 12:09:11.075386 1344640 cli_runner.go:164] Run: docker container inspect multinode-546948 --format={{.State.Status}}
	I0429 12:09:11.100837 1344640 status.go:330] multinode-546948 host status = "Running" (err=<nil>)
	I0429 12:09:11.100865 1344640 host.go:66] Checking if "multinode-546948" exists ...
	I0429 12:09:11.101183 1344640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-546948
	I0429 12:09:11.119327 1344640 host.go:66] Checking if "multinode-546948" exists ...
	I0429 12:09:11.119640 1344640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:09:11.119688 1344640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-546948
	I0429 12:09:11.145975 1344640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34413 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/multinode-546948/id_rsa Username:docker}
	I0429 12:09:11.234171 1344640 ssh_runner.go:195] Run: systemctl --version
	I0429 12:09:11.238742 1344640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:09:11.251075 1344640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 12:09:11.306764 1344640 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-04-29 12:09:11.296935678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 12:09:11.307432 1344640 kubeconfig.go:125] found "multinode-546948" server: "https://192.168.67.2:8443"
	I0429 12:09:11.307516 1344640 api_server.go:166] Checking apiserver status ...
	I0429 12:09:11.307575 1344640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:09:11.318860 1344640 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1343/cgroup
	I0429 12:09:11.328917 1344640 api_server.go:182] apiserver freezer: "12:freezer:/docker/7e4ef4fe24306a8c0a535187a41db4298e25e0062006b6a39f5c5519f05ff13b/crio/crio-835bcaab972dd1a457393e6c4dca4e19bfcd4d35f887b78f6cbdbedebe3f4c82"
	I0429 12:09:11.328987 1344640 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7e4ef4fe24306a8c0a535187a41db4298e25e0062006b6a39f5c5519f05ff13b/crio/crio-835bcaab972dd1a457393e6c4dca4e19bfcd4d35f887b78f6cbdbedebe3f4c82/freezer.state
	I0429 12:09:11.337867 1344640 api_server.go:204] freezer state: "THAWED"
	I0429 12:09:11.337898 1344640 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0429 12:09:11.345370 1344640 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0429 12:09:11.345397 1344640 status.go:422] multinode-546948 apiserver status = Running (err=<nil>)
	I0429 12:09:11.345409 1344640 status.go:257] multinode-546948 status: &{Name:multinode-546948 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:09:11.345453 1344640 status.go:255] checking status of multinode-546948-m02 ...
	I0429 12:09:11.345790 1344640 cli_runner.go:164] Run: docker container inspect multinode-546948-m02 --format={{.State.Status}}
	I0429 12:09:11.362099 1344640 status.go:330] multinode-546948-m02 host status = "Running" (err=<nil>)
	I0429 12:09:11.362126 1344640 host.go:66] Checking if "multinode-546948-m02" exists ...
	I0429 12:09:11.362420 1344640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-546948-m02
	I0429 12:09:11.378124 1344640 host.go:66] Checking if "multinode-546948-m02" exists ...
	I0429 12:09:11.379223 1344640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:09:11.379267 1344640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-546948-m02
	I0429 12:09:11.398825 1344640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34418 SSHKeyPath:/home/jenkins/minikube-integration/18756-1231546/.minikube/machines/multinode-546948-m02/id_rsa Username:docker}
	I0429 12:09:11.485548 1344640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:09:11.497202 1344640 status.go:257] multinode-546948-m02 status: &{Name:multinode-546948-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:09:11.497236 1344640 status.go:255] checking status of multinode-546948-m03 ...
	I0429 12:09:11.497571 1344640 cli_runner.go:164] Run: docker container inspect multinode-546948-m03 --format={{.State.Status}}
	I0429 12:09:11.513543 1344640 status.go:330] multinode-546948-m03 host status = "Stopped" (err=<nil>)
	I0429 12:09:11.513566 1344640 status.go:343] host is not running, skipping remaining checks
	I0429 12:09:11.513574 1344640 status.go:257] multinode-546948-m03 status: &{Name:multinode-546948-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
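The status checks in the stderr trace above read disk usage by shelling out to `df -h /var | awk 'NR==2{print $5}'`. A local sketch with a printf-generated `df` table (the sample sizes and device name are assumptions) shows what that awk program extracts:

```shell
# Hypothetical two-line `df -h /var` output; NR==2 selects the data row,
# and $5 is the Use% column (awk splits on runs of whitespace).
printf 'Filesystem      Size  Used Avail Use%% Mounted on\n/dev/vda1        98G   12G   81G  13%% /var\n' \
  | awk 'NR==2{print $5}'
# → 13%
```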

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.73s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-546948 node start m03 -v=7 --alsologtostderr: (8.947457129s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.73s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (85.51s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-546948
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-546948
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-546948: (24.759833261s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-546948 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-546948 --wait=true -v=8 --alsologtostderr: (1m0.601590605s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-546948
--- PASS: TestMultiNode/serial/RestartKeepsNodes (85.51s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.22s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-546948 node delete m03: (4.561898878s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.22s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.87s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-546948 stop: (23.675262748s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-546948 status: exit status 7 (99.105338ms)

                                                
                                                
-- stdout --
	multinode-546948
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-546948-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-546948 status --alsologtostderr: exit status 7 (96.444946ms)

                                                
                                                
-- stdout --
	multinode-546948
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-546948-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:11:15.808790 1351700 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:11:15.808909 1351700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:11:15.808920 1351700 out.go:304] Setting ErrFile to fd 2...
	I0429 12:11:15.808925 1351700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:11:15.809183 1351700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18756-1231546/.minikube/bin
	I0429 12:11:15.809358 1351700 out.go:298] Setting JSON to false
	I0429 12:11:15.809388 1351700 mustload.go:65] Loading cluster: multinode-546948
	I0429 12:11:15.809510 1351700 notify.go:220] Checking for updates...
	I0429 12:11:15.809791 1351700 config.go:182] Loaded profile config "multinode-546948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:11:15.809808 1351700 status.go:255] checking status of multinode-546948 ...
	I0429 12:11:15.810290 1351700 cli_runner.go:164] Run: docker container inspect multinode-546948 --format={{.State.Status}}
	I0429 12:11:15.828321 1351700 status.go:330] multinode-546948 host status = "Stopped" (err=<nil>)
	I0429 12:11:15.828347 1351700 status.go:343] host is not running, skipping remaining checks
	I0429 12:11:15.828355 1351700 status.go:257] multinode-546948 status: &{Name:multinode-546948 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:11:15.828378 1351700 status.go:255] checking status of multinode-546948-m02 ...
	I0429 12:11:15.828719 1351700 cli_runner.go:164] Run: docker container inspect multinode-546948-m02 --format={{.State.Status}}
	I0429 12:11:15.845193 1351700 status.go:330] multinode-546948-m02 host status = "Stopped" (err=<nil>)
	I0429 12:11:15.845214 1351700 status.go:343] host is not running, skipping remaining checks
	I0429 12:11:15.845222 1351700 status.go:257] multinode-546948-m02 status: &{Name:multinode-546948-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.87s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (55.28s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-546948 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-546948 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (54.634496421s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-546948 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.28s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (33.34s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-546948
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-546948-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-546948-m02 --driver=docker  --container-runtime=crio: exit status 14 (86.075012ms)

                                                
                                                
-- stdout --
	* [multinode-546948-m02] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18756
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18756-1231546/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18756-1231546/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-546948-m02' is duplicated with machine name 'multinode-546948-m02' in profile 'multinode-546948'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-546948-m03 --driver=docker  --container-runtime=crio
E0429 12:12:30.301489 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-546948-m03 --driver=docker  --container-runtime=crio: (30.869313185s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-546948
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-546948: exit status 80 (383.102107ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-546948 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-546948-m03 already exists in multinode-546948-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-546948-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-546948-m03: (1.931910433s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.34s)

                                                
                                    
TestPreload (156.18s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-950283 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-950283 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m52.882989556s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-950283 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-950283 image pull gcr.io/k8s-minikube/busybox: (1.846903668s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-950283
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-950283: (5.80127966s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-950283 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-950283 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (33.096980994s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-950283 image list
helpers_test.go:175: Cleaning up "test-preload-950283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-950283
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-950283: (2.293375885s)
--- PASS: TestPreload (156.18s)

TestScheduledStopUnix (110.48s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-874483 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-874483 --memory=2048 --driver=docker  --container-runtime=crio: (34.351181667s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-874483 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-874483 -n scheduled-stop-874483
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-874483 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-874483 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-874483 -n scheduled-stop-874483
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-874483
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-874483 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-874483
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-874483: exit status 7 (80.382561ms)

-- stdout --
	scheduled-stop-874483
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-874483 -n scheduled-stop-874483
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-874483 -n scheduled-stop-874483: exit status 7 (78.200994ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-874483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-874483
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-874483: (4.490341085s)
--- PASS: TestScheduledStopUnix (110.48s)

TestInsufficientStorage (10.4s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-759801 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-759801 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.89088177s)

-- stdout --
	{"specversion":"1.0","id":"f8825c7c-990b-46f7-a6d4-243b3f03ece3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-759801] minikube v1.33.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ab5161ba-c6d8-415b-9c47-a78158b118cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18756"}}
	{"specversion":"1.0","id":"6b061dc5-8141-4a32-b7b2-e695fbc83f6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"25c56711-edae-4a89-9fb8-80b3387a433a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18756-1231546/kubeconfig"}}
	{"specversion":"1.0","id":"67d6c5ec-33fe-404d-9b75-8cd2a447e828","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18756-1231546/.minikube"}}
	{"specversion":"1.0","id":"40036180-d234-42c1-817d-1f24d0d120cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3aec7e89-0d40-411a-b205-4f5eae097cc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"14586e24-91ff-4e0f-8b6c-423f715d8093","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f75b0294-0adc-41cb-bd4c-d2ff50d6cc43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f53f5303-b59f-4846-8c12-de8f6b64499c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1dcd726a-1752-44bd-8e30-66d699d26219","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"5e972efc-77d0-44ae-87d7-c9b709fd11f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-759801\" primary control-plane node in \"insufficient-storage-759801\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"894cfabe-5a21-463e-9ee9-21f0a6c093fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-1713736339-18706 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"aab87228-06a8-4c1c-b564-db583f22b499","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"98b9fe2d-4de5-4289-a4fa-59beb48ac79b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
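The `--output=json` start output above is a stream of CloudEvents-style JSON objects, one per line, with the payload under `data`. As an illustration only (not part of the test suite), a minimal Python sketch of summarizing such a stream, using two lines copied from the log; `summarize` and `sample_stream` are hypothetical names:

```python
import json

# Two sample events copied (abridged) from the test log above.
sample_stream = [
    '{"specversion":"1.0","id":"ab5161ba-c6d8-415b-9c47-a78158b118cf",'
    '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info",'
    '"datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18756"}}',
    '{"specversion":"1.0","id":"98b9fe2d-4de5-4289-a4fa-59beb48ac79b",'
    '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",'
    '"datacontenttype":"application/json","data":{"exitcode":"26",'
    '"name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}',
]

def summarize(lines):
    """Return (short_type, message) pairs for each CloudEvent line."""
    out = []
    for line in lines:
        event = json.loads(line)
        # "io.k8s.sigs.minikube.info" -> "info"; likewise "step", "error"
        short_type = event["type"].rsplit(".", 1)[-1]
        out.append((short_type, event["data"].get("message", "")))
    return out

for kind, msg in summarize(sample_stream):
    print(kind, msg)
```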
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-759801 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-759801 --output=json --layout=cluster: exit status 7 (286.033251ms)

-- stdout --
	{"Name":"insufficient-storage-759801","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-759801","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0429 12:17:23.395398 1368291 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-759801" does not appear in /home/jenkins/minikube-integration/18756-1231546/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-759801 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-759801 --output=json --layout=cluster: exit status 7 (295.223167ms)

-- stdout --
	{"Name":"insufficient-storage-759801","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-759801","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0429 12:17:23.691465 1368344 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-759801" does not appear in /home/jenkins/minikube-integration/18756-1231546/kubeconfig
	E0429 12:17:23.702781 1368344 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/insufficient-storage-759801/events.json: no such file or directory

** /stderr **
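The `--layout=cluster` status above nests per-component states under `Nodes[].Components`. A minimal Python sketch of reading that shape, using an abridged copy of the JSON from the log (illustrative only; field names exactly as they appear above):

```python
import json

# Abridged copy of the `status --output=json --layout=cluster` stdout above.
status_json = '''{"Name":"insufficient-storage-759801","StatusCode":507,
"StatusName":"InsufficientStorage","BinaryVersion":"v1.33.0",
"Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},
"Nodes":[{"Name":"insufficient-storage-759801","StatusCode":507,
"StatusName":"InsufficientStorage",
"Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},
"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'''

status = json.loads(status_json)
print(status["StatusName"])               # cluster-level status name
for node in status["Nodes"]:
    for name, comp in node["Components"].items():
        print(name, comp["StatusName"])   # per-component status per node
```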
helpers_test.go:175: Cleaning up "insufficient-storage-759801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-759801
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-759801: (1.923208906s)
--- PASS: TestInsufficientStorage (10.40s)

TestRunningBinaryUpgrade (67.53s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3343373604 start -p running-upgrade-731523 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3343373604 start -p running-upgrade-731523 --memory=2200 --vm-driver=docker  --container-runtime=crio: (38.999619246s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-731523 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-731523 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.436735477s)
helpers_test.go:175: Cleaning up "running-upgrade-731523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-731523
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-731523: (2.844968287s)
--- PASS: TestRunningBinaryUpgrade (67.53s)

TestKubernetesUpgrade (390.06s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-475015 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-475015 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (58.366880344s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-475015
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-475015: (1.857670061s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-475015 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-475015 status --format={{.Host}}: exit status 7 (90.665141ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-475015 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-475015 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m46.671006834s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-475015 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-475015 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-475015 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (159.357265ms)

-- stdout --
	* [kubernetes-upgrade-475015] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18756
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18756-1231546/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18756-1231546/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-475015
	    minikube start -p kubernetes-upgrade-475015 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4750152 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-475015 --kubernetes-version=v1.30.0
	    

** /stderr **
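The K8S_DOWNGRADE_UNSUPPORTED guard above rejects any requested version lower than the cluster's current one. An illustrative version comparison in Python; minikube's actual check is implemented in Go, so `parse_version` here is a hypothetical stand-in:

```python
def parse_version(v):
    """'v1.30.0' -> (1, 30, 0); tuples compare component-wise."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

current, requested = "v1.30.0", "v1.20.0"
if parse_version(requested) < parse_version(current):
    # Mirrors the exit status 106 path in the log above.
    print(f"refusing to downgrade {current} -> {requested}")
```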
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-475015 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-475015 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.714516264s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-475015" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-475015
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-475015: (3.065173047s)
--- PASS: TestKubernetesUpgrade (390.06s)

TestMissingContainerUpgrade (153.62s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.708952263 start -p missing-upgrade-028728 --memory=2200 --driver=docker  --container-runtime=crio
E0429 12:17:30.308844 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
E0429 12:17:45.370506 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.708952263 start -p missing-upgrade-028728 --memory=2200 --driver=docker  --container-runtime=crio: (1m13.851723503s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-028728
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-028728: (10.413756783s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-028728
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-028728 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-028728 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m5.572104946s)
helpers_test.go:175: Cleaning up "missing-upgrade-028728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-028728
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-028728: (2.304713439s)
--- PASS: TestMissingContainerUpgrade (153.62s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-375778 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-375778 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (95.511727ms)

-- stdout --
	* [NoKubernetes-375778] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18756
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18756-1231546/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18756-1231546/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
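The MK_USAGE error above reflects that `--no-kubernetes` and `--kubernetes-version` are mutually exclusive. As a language-neutral illustration only (minikube's own flag handling is in Go, not argparse), the same mutual-exclusion check sketched with Python's standard library:

```python
import argparse

# Hypothetical re-creation of the flag conflict from the log above.
parser = argparse.ArgumentParser(prog="minikube-start-sketch")
group = parser.add_mutually_exclusive_group()
group.add_argument("--no-kubernetes", action="store_true")
group.add_argument("--kubernetes-version")

try:
    # Supplying both flags trips the mutual-exclusion check.
    parser.parse_args(["--no-kubernetes", "--kubernetes-version", "1.20"])
except SystemExit:
    print("rejected: cannot combine the two flags")
```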
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (39.95s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-375778 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-375778 --driver=docker  --container-runtime=crio: (39.502618366s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-375778 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.95s)

TestNoKubernetes/serial/StartWithStopK8s (8.74s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-375778 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-375778 --no-kubernetes --driver=docker  --container-runtime=crio: (6.345841715s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-375778 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-375778 status -o json: exit status 2 (365.428935ms)

-- stdout --
	{"Name":"NoKubernetes-375778","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-375778
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-375778: (2.031727884s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.74s)

TestNoKubernetes/serial/Start (8.76s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-375778 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-375778 --no-kubernetes --driver=docker  --container-runtime=crio: (8.754956442s)
--- PASS: TestNoKubernetes/serial/Start (8.76s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-375778 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-375778 "sudo systemctl is-active --quiet service kubelet": exit status 1 (347.057856ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

TestNoKubernetes/serial/ProfileList (1.1s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.10s)

TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-375778
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-375778: (1.227535837s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (7.88s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-375778 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-375778 --driver=docker  --container-runtime=crio: (7.882802198s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.88s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-375778 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-375778 "sudo systemctl is-active --quiet service kubelet": exit status 1 (379.458755ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

TestStoppedBinaryUpgrade/Setup (1.34s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.34s)

TestStoppedBinaryUpgrade/Upgrade (68.48s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.735116019 start -p stopped-upgrade-654212 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0429 12:20:33.346321 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.735116019 start -p stopped-upgrade-654212 --memory=2200 --vm-driver=docker  --container-runtime=crio: (35.967818117s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.735116019 -p stopped-upgrade-654212 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.735116019 -p stopped-upgrade-654212 stop: (2.846567531s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-654212 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-654212 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.665066403s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (68.48s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-654212
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-654212: (1.122924871s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

TestPause/serial/Start (81.36s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-977301 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0429 12:22:30.300943 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
E0429 12:22:45.370808 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-977301 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m21.357716897s)
--- PASS: TestPause/serial/Start (81.36s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (20.31s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-977301 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-977301 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.287085097s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (20.31s)

                                                
                                    
TestPause/serial/Pause (0.98s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-977301 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.98s)

                                                
                                    
TestPause/serial/VerifyStatus (0.4s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-977301 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-977301 --output=json --layout=cluster: exit status 2 (395.341999ms)

                                                
                                                
-- stdout --
	{"Name":"pause-977301","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-977301","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)
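A note on the VerifyStatus output above: the `minikube status --output=json --layout=cluster` payload uses HTTP-style status codes (418 = Paused, 405 = Stopped, 200 = OK), and the command exits non-zero (here, exit status 2) for a paused cluster, which is why the test expects a non-zero exit. A minimal Python sketch of decoding that payload, using a trimmed copy of the JSON from the log above (no fields beyond those shown are assumed):

```python
import json

# Trimmed copy of the `minikube status --output=json --layout=cluster`
# payload printed in the VerifyStatus log above.
payload = '''{"Name":"pause-977301","StatusCode":418,"StatusName":"Paused","BinaryVersion":"v1.33.0","Nodes":[{"Name":"pause-977301","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'''

status = json.loads(payload)
components = status["Nodes"][0]["Components"]
# HTTP-style codes: 418 = Paused, 405 = Stopped, 200 = OK.
print(status["StatusName"])                   # Paused
print(components["apiserver"]["StatusName"])  # Paused
print(components["kubelet"]["StatusName"])    # Stopped
```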

                                                
                                    
TestPause/serial/Unpause (0.86s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-977301 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.86s)

                                                
                                    
TestPause/serial/PauseAgain (1.13s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-977301 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-977301 --alsologtostderr -v=5: (1.131010802s)
--- PASS: TestPause/serial/PauseAgain (1.13s)

                                                
                                    
TestPause/serial/DeletePaused (2.9s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-977301 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-977301 --alsologtostderr -v=5: (2.895254359s)
--- PASS: TestPause/serial/DeletePaused (2.90s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.4s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-977301
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-977301: exit status 1 (28.445414ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-977301: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.40s)
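The check in pause_test.go:173 above treats a failing `docker volume inspect` as proof the profile's volume was cleaned up: the command exits non-zero, prints an empty JSON array on stdout, and reports "no such volume" on stderr. A hypothetical Python sketch of that decision (illustrative only, not the test's actual Go code), with the values copied from the log output above:

```python
import json

# Results copied from the `docker volume inspect pause-977301` output above.
exit_code = 1
stdout = "[]"
stderr = "Error response from daemon: get pause-977301: no such volume"

# The volume is considered deleted when inspect fails, stderr names the
# missing volume, and stdout decodes to an empty list.
volume_gone = (
    exit_code != 0
    and "no such volume" in stderr
    and json.loads(stdout) == []
)
print(volume_gone)  # True
```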

                                                
                                    
TestNetworkPlugins/group/false (5.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-844019 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-844019 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (268.33153ms)

                                                
                                                
-- stdout --
	* [false-844019] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18756
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18756-1231546/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18756-1231546/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:24:51.281178 1407037 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:24:51.281377 1407037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:24:51.281384 1407037 out.go:304] Setting ErrFile to fd 2...
	I0429 12:24:51.281389 1407037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:24:51.281640 1407037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18756-1231546/.minikube/bin
	I0429 12:24:51.282033 1407037 out.go:298] Setting JSON to false
	I0429 12:24:51.282992 1407037 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":29236,"bootTime":1714364256,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0429 12:24:51.283060 1407037 start.go:139] virtualization:  
	I0429 12:24:51.285849 1407037 out.go:177] * [false-844019] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0429 12:24:51.288728 1407037 out.go:177]   - MINIKUBE_LOCATION=18756
	I0429 12:24:51.290595 1407037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:24:51.288769 1407037 notify.go:220] Checking for updates...
	I0429 12:24:51.294365 1407037 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18756-1231546/kubeconfig
	I0429 12:24:51.296440 1407037 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18756-1231546/.minikube
	I0429 12:24:51.298263 1407037 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0429 12:24:51.300390 1407037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:24:51.302950 1407037 config.go:182] Loaded profile config "kubernetes-upgrade-475015": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 12:24:51.303064 1407037 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:24:51.331451 1407037 docker.go:122] docker version: linux-26.1.0:Docker Engine - Community
	I0429 12:24:51.331573 1407037 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 12:24:51.452763 1407037 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-29 12:24:51.441906106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 12:24:51.452874 1407037 docker.go:295] overlay module found
	I0429 12:24:51.454785 1407037 out.go:177] * Using the docker driver based on user configuration
	I0429 12:24:51.456608 1407037 start.go:297] selected driver: docker
	I0429 12:24:51.456634 1407037 start.go:901] validating driver "docker" against <nil>
	I0429 12:24:51.456648 1407037 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:24:51.459050 1407037 out.go:177] 
	W0429 12:24:51.460867 1407037 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0429 12:24:51.462703 1407037 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-844019 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-844019

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-844019

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-844019

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-844019

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-844019

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-844019

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-844019

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-844019

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-844019

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-844019

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-844019

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-844019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-844019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-844019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-844019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-844019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-844019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-844019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-844019" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-844019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-844019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-844019" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Apr 2024 12:24:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-475015
contexts:
- context:
    cluster: kubernetes-upgrade-475015
    extensions:
    - extension:
        last-update: Mon, 29 Apr 2024 12:24:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-475015
  name: kubernetes-upgrade-475015
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-475015
  user:
    client-certificate: /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/kubernetes-upgrade-475015/client.crt
    client-key: /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/kubernetes-upgrade-475015/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-844019

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

>>> host: /etc/containerd/config.toml:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

>>> host: containerd config dump:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

>>> host: crio daemon status:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

>>> host: crio daemon config:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

>>> host: /etc/crio:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

>>> host: crio config:
* Profile "false-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-844019"

----------------------- debugLogs end: false-844019 [took: 5.137328799s] --------------------------------
helpers_test.go:175: Cleaning up "false-844019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-844019
--- PASS: TestNetworkPlugins/group/false (5.58s)

TestStartStop/group/old-k8s-version/serial/FirstStart (167.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-425197 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0429 12:27:30.301165 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
E0429 12:27:45.371006 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-425197 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m47.255646169s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (167.26s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-425197 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [961a9b41-4804-4a0a-aae8-f9d804f93528] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [961a9b41-4804-4a0a-aae8-f9d804f93528] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004394885s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-425197 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.13s)

TestStartStop/group/no-preload/serial/FirstStart (65.41s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-880190 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-880190 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (1m5.412328979s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (65.41s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-425197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-425197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.633289823s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-425197 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.81s)

TestStartStop/group/old-k8s-version/serial/Stop (14.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-425197 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-425197 --alsologtostderr -v=3: (14.616507163s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (14.62s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-425197 -n old-k8s-version-425197
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-425197 -n old-k8s-version-425197: exit status 7 (110.093581ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-425197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/no-preload/serial/DeployApp (9.45s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-880190 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7b75f14e-5128-4cd3-b460-7715d00b292e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7b75f14e-5128-4cd3-b460-7715d00b292e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005338056s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-880190 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.45s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.66s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-880190 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-880190 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.507046764s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-880190 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.66s)

TestStartStop/group/no-preload/serial/Stop (12.15s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-880190 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-880190 --alsologtostderr -v=3: (12.147215352s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.15s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-880190 -n no-preload-880190
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-880190 -n no-preload-880190: exit status 7 (81.7186ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-880190 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (288.96s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-880190 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
E0429 12:32:30.300661 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
E0429 12:32:45.371104 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-880190 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (4m48.620425932s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-880190 -n no-preload-880190
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (288.96s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-htjcs" [0ef0192d-6ab3-4413-bd73-aec838205c71] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003726267s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-htjcs" [0ef0192d-6ab3-4413-bd73-aec838205c71] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004428399s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-880190 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-880190 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.17s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-880190 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-880190 -n no-preload-880190
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-880190 -n no-preload-880190: exit status 2 (332.012829ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-880190 -n no-preload-880190
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-880190 -n no-preload-880190: exit status 2 (323.657323ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-880190 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-880190 -n no-preload-880190
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-880190 -n no-preload-880190
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.17s)

TestStartStop/group/embed-certs/serial/FirstStart (83.36s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-635140 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-635140 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (1m23.354995303s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.36s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-jjpdf" [93cde5f7-d140-4458-9ff0-fa9da609e48c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005254445s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-jjpdf" [93cde5f7-d140-4458-9ff0-fa9da609e48c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004453824s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-425197 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-425197 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/old-k8s-version/serial/Pause (3.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-425197 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-425197 --alsologtostderr -v=1: (1.255940737s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-425197 -n old-k8s-version-425197
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-425197 -n old-k8s-version-425197: exit status 2 (397.770621ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-425197 -n old-k8s-version-425197
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-425197 -n old-k8s-version-425197: exit status 2 (407.609876ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-425197 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-425197 -n old-k8s-version-425197
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-425197 -n old-k8s-version-425197
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.85s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-206345 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-206345 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (1m23.434681454s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.43s)

TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-635140 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e0fbf394-a00e-4376-af9e-4574ff08fc0a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e0fbf394-a00e-4376-af9e-4574ff08fc0a] Running
E0429 12:37:13.347373 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00352045s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-635140 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-635140 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-635140 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.005309543s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-635140 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/embed-certs/serial/Stop (11.98s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-635140 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-635140 --alsologtostderr -v=3: (11.981447079s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.98s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-635140 -n embed-certs-635140
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-635140 -n embed-certs-635140: exit status 7 (78.077362ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-635140 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (302.82s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-635140 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
E0429 12:37:30.300995 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-635140 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (5m2.460331851s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-635140 -n embed-certs-635140
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (302.82s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-206345 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [214fe9cc-6ad0-4479-9b0b-81ff443262a0] Pending
helpers_test.go:344: "busybox" [214fe9cc-6ad0-4479-9b0b-81ff443262a0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [214fe9cc-6ad0-4479-9b0b-81ff443262a0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003055651s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-206345 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.40s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-206345 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-206345 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.579255825s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-206345 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.78s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-206345 --alsologtostderr -v=3
E0429 12:37:45.370941 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-206345 --alsologtostderr -v=3: (12.280772884s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.28s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-206345 -n default-k8s-diff-port-206345
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-206345 -n default-k8s-diff-port-206345: exit status 7 (79.688733ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-206345 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (307.32s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-206345 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
E0429 12:39:03.599680 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/client.crt: no such file or directory
E0429 12:39:03.604980 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/client.crt: no such file or directory
E0429 12:39:03.615196 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/client.crt: no such file or directory
E0429 12:39:03.635534 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/client.crt: no such file or directory
E0429 12:39:03.675850 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/client.crt: no such file or directory
E0429 12:39:03.756237 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/client.crt: no such file or directory
E0429 12:39:03.916579 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/client.crt: no such file or directory
E0429 12:39:04.237180 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/client.crt: no such file or directory
E0429 12:39:04.877511 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/client.crt: no such file or directory
E0429 12:39:06.158169 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/client.crt: no such file or directory
E0429 12:39:08.718754 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/client.crt: no such file or directory
E0429 12:39:13.839522 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/client.crt: no such file or directory
E0429 12:39:24.080212 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/client.crt: no such file or directory
E0429 12:39:44.560414 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/client.crt: no such file or directory
E0429 12:40:11.265404 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/no-preload-880190/client.crt: no such file or directory
E0429 12:40:11.270683 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/no-preload-880190/client.crt: no such file or directory
E0429 12:40:11.281090 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/no-preload-880190/client.crt: no such file or directory
E0429 12:40:11.301371 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/no-preload-880190/client.crt: no such file or directory
E0429 12:40:11.341647 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/no-preload-880190/client.crt: no such file or directory
E0429 12:40:11.422096 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/no-preload-880190/client.crt: no such file or directory
E0429 12:40:11.582424 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/no-preload-880190/client.crt: no such file or directory
E0429 12:40:11.903035 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/no-preload-880190/client.crt: no such file or directory
E0429 12:40:12.543447 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/no-preload-880190/client.crt: no such file or directory
E0429 12:40:13.824541 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/no-preload-880190/client.crt: no such file or directory
E0429 12:40:16.385742 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/no-preload-880190/client.crt: no such file or directory
E0429 12:40:21.505955 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/no-preload-880190/client.crt: no such file or directory
E0429 12:40:25.521242 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/client.crt: no such file or directory
E0429 12:40:31.747057 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/no-preload-880190/client.crt: no such file or directory
E0429 12:40:52.227270 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/no-preload-880190/client.crt: no such file or directory
E0429 12:41:33.187483 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/no-preload-880190/client.crt: no such file or directory
E0429 12:41:47.441425 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/client.crt: no such file or directory
E0429 12:42:28.414112 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-206345 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (5m6.873136175s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-206345 -n default-k8s-diff-port-206345
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (307.32s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-n8d7r" [ca560c56-9ce5-4379-aa80-ce8f4150dfb4] Running
E0429 12:42:30.301420 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/addons-760922/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003618836s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-n8d7r" [ca560c56-9ce5-4379-aa80-ce8f4150dfb4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003231666s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-635140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-635140 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.16s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-635140 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-635140 -n embed-certs-635140
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-635140 -n embed-certs-635140: exit status 2 (376.040463ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-635140 -n embed-certs-635140
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-635140 -n embed-certs-635140: exit status 2 (327.110737ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-635140 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-635140 -n embed-certs-635140
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-635140 -n embed-certs-635140
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.16s)

TestStartStop/group/newest-cni/serial/FirstStart (44.58s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-389953 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
E0429 12:42:55.108659 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/no-preload-880190/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-389953 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (44.584492526s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.58s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-cjvb6" [e634ea82-e5f8-494b-8510-d145bd01624e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004523531s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-cjvb6" [e634ea82-e5f8-494b-8510-d145bd01624e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004664756s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-206345 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-206345 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.94s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-206345 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-206345 --alsologtostderr -v=1: (1.161341435s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-206345 -n default-k8s-diff-port-206345
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-206345 -n default-k8s-diff-port-206345: exit status 2 (377.986495ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-206345 -n default-k8s-diff-port-206345
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-206345 -n default-k8s-diff-port-206345: exit status 2 (409.371557ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-206345 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-206345 -n default-k8s-diff-port-206345
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-206345 -n default-k8s-diff-port-206345
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.94s)

TestNetworkPlugins/group/auto/Start (88.78s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-844019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-844019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m28.779116009s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.78s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.6s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-389953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-389953 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.600363966s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.60s)

TestStartStop/group/newest-cni/serial/Stop (1.31s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-389953 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-389953 --alsologtostderr -v=3: (1.312158495s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.31s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-389953 -n newest-cni-389953
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-389953 -n newest-cni-389953: exit status 7 (109.859918ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-389953 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/newest-cni/serial/SecondStart (21.61s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-389953 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-389953 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (21.073322866s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-389953 -n newest-cni-389953
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (21.61s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.43s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-389953 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.43s)

TestStartStop/group/newest-cni/serial/Pause (3.93s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-389953 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-389953 --alsologtostderr -v=1: (1.027795316s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-389953 -n newest-cni-389953
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-389953 -n newest-cni-389953: exit status 2 (436.381004ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-389953 -n newest-cni-389953
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-389953 -n newest-cni-389953: exit status 2 (363.848442ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-389953 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-389953 -n newest-cni-389953
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-389953 -n newest-cni-389953
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.93s)
E0429 12:49:50.699447 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/auto-844019/client.crt: no such file or directory
E0429 12:49:50.704684 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/auto-844019/client.crt: no such file or directory
E0429 12:49:50.714961 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/auto-844019/client.crt: no such file or directory
E0429 12:49:50.735181 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/auto-844019/client.crt: no such file or directory
E0429 12:49:50.775410 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/auto-844019/client.crt: no such file or directory
E0429 12:49:50.855759 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/auto-844019/client.crt: no such file or directory
E0429 12:49:51.016100 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/auto-844019/client.crt: no such file or directory
E0429 12:49:51.336807 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/auto-844019/client.crt: no such file or directory
E0429 12:49:51.977855 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/auto-844019/client.crt: no such file or directory
E0429 12:49:53.258325 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/auto-844019/client.crt: no such file or directory
E0429 12:49:54.623468 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/kindnet-844019/client.crt: no such file or directory
E0429 12:49:54.628719 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/kindnet-844019/client.crt: no such file or directory
E0429 12:49:54.638929 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/kindnet-844019/client.crt: no such file or directory
E0429 12:49:54.659170 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/kindnet-844019/client.crt: no such file or directory
E0429 12:49:54.699435 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/kindnet-844019/client.crt: no such file or directory
E0429 12:49:54.779739 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/kindnet-844019/client.crt: no such file or directory
E0429 12:49:54.940101 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/kindnet-844019/client.crt: no such file or directory
E0429 12:49:55.260644 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/kindnet-844019/client.crt: no such file or directory
E0429 12:49:55.819102 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/auto-844019/client.crt: no such file or directory
E0429 12:49:55.901293 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/kindnet-844019/client.crt: no such file or directory
E0429 12:49:57.181873 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/kindnet-844019/client.crt: no such file or directory
E0429 12:49:59.742415 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/kindnet-844019/client.crt: no such file or directory
E0429 12:50:00.939866 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/auto-844019/client.crt: no such file or directory
E0429 12:50:04.863089 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/kindnet-844019/client.crt: no such file or directory
E0429 12:50:11.180861 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/auto-844019/client.crt: no such file or directory
E0429 12:50:11.265331 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/no-preload-880190/client.crt: no such file or directory
E0429 12:50:14.651285 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/default-k8s-diff-port-206345/client.crt: no such file or directory
E0429 12:50:15.103275 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/kindnet-844019/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (51.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-844019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0429 12:44:03.598954 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/client.crt: no such file or directory
E0429 12:44:31.282088 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/old-k8s-version-425197/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-844019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (51.11336671s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.11s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-844019 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-844019 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-ggljz" [130d3ac1-7f0c-4c2f-b6e6-a5f5a60082af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-ggljz" [130d3ac1-7f0c-4c2f-b6e6-a5f5a60082af] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004659333s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.37s)
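The NetCatPod checks above wait up to 15m0s for pods matching `app=netcat` to go from Pending to Running. A minimal sketch of that wait-with-timeout pattern — not the harness's actual code; `wait_for_healthy` and the `get_phase` stub are hypothetical stand-ins for the real kubectl polling:

```python
import time

def wait_for_healthy(get_phase, timeout_s=15 * 60, interval_s=0.01):
    """Poll get_phase() until it returns "Running", or raise on timeout.

    get_phase is a hypothetical stand-in; the real harness queries the
    cluster for pods matching a label selector instead.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_phase() == "Running":
            return True
        time.sleep(interval_s)
    raise TimeoutError("pod never became healthy")

# Simulated pod that is Pending for the first two polls, then Running.
phases = iter(["Pending", "Pending", "Running"])
print(wait_for_healthy(lambda: next(phases)))  # True
```

The log lines above show exactly this progression: the pod is reported Pending with `ContainersNotReady`, then Running, and the test records how long the wait took.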

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-p7lw8" [f641e6d8-28ba-424b-9701-af7601799e25] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005389025s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-844019 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-844019 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)
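The DNS checks run `nslookup kubernetes.default` inside the netcat pod and only care whether resolution succeeds. A local sketch of the same pass/fail idea using the host resolver (`resolves` is a hypothetical helper; `kubernetes.default` itself is only resolvable via in-cluster DNS, so the example uses names that behave predictably anywhere):

```python
import socket

def resolves(name):
    """Return True if the local resolver can resolve name, mirroring the
    pass/fail use of nslookup in the tests (resolved addresses are ignored)."""
    try:
        socket.getaddrinfo(name, None)
        return True
    except socket.gaierror:
        return False

print(resolves("localhost"))             # True
print(resolves("no-such-host.invalid"))  # False (.invalid is reserved, never resolves)
```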

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-844019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-844019 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gws7g" [80670c27-489e-48e8-afdd-38dbb6150a2e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-gws7g" [80670c27-489e-48e8-afdd-38dbb6150a2e] Running
E0429 12:45:11.264850 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/no-preload-880190/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004835929s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.37s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-844019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-844019 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-844019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-844019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (72.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-844019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-844019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m12.505907173s)
--- PASS: TestNetworkPlugins/group/flannel/Start (72.51s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (89.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-844019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0429 12:45:38.948954 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/no-preload-880190/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-844019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m29.249547474s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (89.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vcbmd" [2cd7ec33-b961-4f9e-93cd-e7734c6cb03b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004309069s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-844019 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-844019 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-djpsf" [6fe83cef-abd3-4236-ba2c-915c1ab112a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-djpsf" [6fe83cef-abd3-4236-ba2c-915c1ab112a8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003839356s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-844019 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-844019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-844019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)
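The Localhost and HairPin checks both reduce to `nc -w 5 -i 5 -z <host> 8080`: attempt a TCP connect within a timeout and send no data (HairPin targets the pod's own service name, Localhost targets 127.0.0.1). A rough local equivalent of that zero-I/O probe, assuming no cluster — the `can_connect` helper is hypothetical, and a listener is started in-process just to have something to probe:

```python
import socket

def can_connect(host, port, timeout_s=5.0):
    """Roughly what `nc -w 5 -z host port` does: try a TCP connect
    within the timeout, report success or failure, send nothing."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False

# Probe a listener we start ourselves on an ephemeral port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
print(can_connect("127.0.0.1", port))  # True
srv.close()
print(can_connect("127.0.0.1", port))  # False (nothing listening anymore)
```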

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-844019 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-844019 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-pgpdz" [3b2d8472-165c-4e11-92db-ed7468ede685] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-pgpdz" [3b2d8472-165c-4e11-92db-ed7468ede685] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004329531s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (91.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-844019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-844019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m31.947105915s)
--- PASS: TestNetworkPlugins/group/bridge/Start (91.95s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-844019 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-844019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-844019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (75.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-844019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0429 12:47:45.370522 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/functional-179378/client.crt: no such file or directory
E0429 12:47:51.288876 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/default-k8s-diff-port-206345/client.crt: no such file or directory
E0429 12:48:11.769524 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/default-k8s-diff-port-206345/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-844019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m15.272656898s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-844019 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-844019 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-jdw2z" [a9d3916a-ed77-4c43-8d68-2f5e9ee2adee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0429 12:48:52.730336 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/default-k8s-diff-port-206345/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-jdw2z" [a9d3916a-ed77-4c43-8d68-2f5e9ee2adee] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003857156s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7972d" [3df3cfb9-5974-47c9-9920-3a838a4b1b6e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007079386s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-844019 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-844019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-844019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-844019 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-844019 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-vxbwq" [ece5f29f-adb8-4d92-8d7e-4e61784a5c38] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-vxbwq" [ece5f29f-adb8-4d92-8d7e-4e61784a5c38] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004176335s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.27s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-844019 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-844019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-844019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (63.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-844019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-844019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m3.088304129s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.09s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-844019 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-844019 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tskgp" [b5055492-af51-496d-a05b-80a1f2b6c589] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tskgp" [b5055492-af51-496d-a05b-80a1f2b6c589] Running
E0429 12:50:31.661100 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/auto-844019/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004256944s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-844019 exec deployment/netcat -- nslookup kubernetes.default
E0429 12:50:35.584418 1236974 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/kindnet-844019/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-844019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-844019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

Test skip (29/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0/kubectl (0.00s)

TestDownloadOnlyKic (0.57s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-209390 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-209390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-209390
--- SKIP: TestDownloadOnlyKic (0.57s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.25s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-720552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-720552
--- SKIP: TestStartStop/group/disable-driver-mounts (0.25s)

TestNetworkPlugins/group/kubenet (4.67s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-844019 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-844019

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-844019

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-844019

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-844019

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-844019

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-844019

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-844019

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-844019

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-844019

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-844019

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> host: /etc/hosts:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> host: /etc/resolv.conf:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-844019

>>> host: crictl pods:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> host: crictl containers:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> k8s: describe netcat deployment:
error: context "kubenet-844019" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-844019" does not exist

>>> k8s: netcat logs:
error: context "kubenet-844019" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-844019" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-844019" does not exist

>>> k8s: coredns logs:
error: context "kubenet-844019" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-844019" does not exist

>>> k8s: api server logs:
error: context "kubenet-844019" does not exist

>>> host: /etc/cni:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> host: ip a s:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> host: ip r s:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> host: iptables-save:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> host: iptables table nat:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-844019" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-844019" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-844019" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> host: kubelet daemon config:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> k8s: kubelet logs:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Apr 2024 12:24:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-475015
contexts:
- context:
    cluster: kubernetes-upgrade-475015
    extensions:
    - extension:
        last-update: Mon, 29 Apr 2024 12:24:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-475015
  name: kubernetes-upgrade-475015
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-475015
  user:
    client-certificate: /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/kubernetes-upgrade-475015/client.crt
    client-key: /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/kubernetes-upgrade-475015/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-844019

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> host: containerd daemon status:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> host: containerd daemon config:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> host: containerd config dump:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> host: crio daemon status:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> host: crio daemon config:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> host: /etc/crio:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

>>> host: crio config:
* Profile "kubenet-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-844019"

----------------------- debugLogs end: kubenet-844019 [took: 4.463371457s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-844019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-844019
--- SKIP: TestNetworkPlugins/group/kubenet (4.67s)

TestNetworkPlugins/group/cilium (4.48s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-844019 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-844019

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-844019

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-844019

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-844019

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-844019

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-844019

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-844019

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-844019

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-844019

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-844019

>>> host: /etc/nsswitch.conf:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: /etc/hosts:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: /etc/resolv.conf:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-844019

>>> host: crictl pods:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: crictl containers:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> k8s: describe netcat deployment:
error: context "cilium-844019" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-844019" does not exist

>>> k8s: netcat logs:
error: context "cilium-844019" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-844019" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-844019" does not exist

>>> k8s: coredns logs:
error: context "cilium-844019" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-844019" does not exist

>>> k8s: api server logs:
error: context "cilium-844019" does not exist

>>> host: /etc/cni:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: ip a s:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: ip r s:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: iptables-save:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: iptables table nat:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-844019

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-844019

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-844019" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-844019" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-844019

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-844019

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-844019" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-844019" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-844019" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-844019" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-844019" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: kubelet daemon config:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> k8s: kubelet logs:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18756-1231546/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Apr 2024 12:24:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-475015
contexts:
- context:
    cluster: kubernetes-upgrade-475015
    extensions:
    - extension:
        last-update: Mon, 29 Apr 2024 12:24:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-475015
  name: kubernetes-upgrade-475015
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-475015
  user:
    client-certificate: /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/kubernetes-upgrade-475015/client.crt
    client-key: /home/jenkins/minikube-integration/18756-1231546/.minikube/profiles/kubernetes-upgrade-475015/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-844019

>>> host: docker daemon status:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: docker daemon config:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: docker system info:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: cri-docker daemon status:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: cri-docker daemon config:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: cri-dockerd version:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: containerd daemon status:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: containerd daemon config:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: containerd config dump:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: crio daemon status:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: crio daemon config:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: /etc/crio:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

>>> host: crio config:
* Profile "cilium-844019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-844019"

----------------------- debugLogs end: cilium-844019 [took: 4.320788695s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-844019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-844019
--- SKIP: TestNetworkPlugins/group/cilium (4.48s)
