Test Report: Docker_Linux_containerd_arm64 19084

7ef7da66050fbee35d8f820fabec0ee963fd337e:2024-06-17:34930

Test fail (7/328)

TestAddons/parallel/Ingress (35.71s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-134601 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-134601 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-134601 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a36d1bf1-332f-4b0c-ad12-aa1c7e879d9b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a36d1bf1-332f-4b0c-ad12-aa1c7e879d9b] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003525526s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-134601 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-134601 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-134601 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:299: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.060737817s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
addons_test.go:301: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:305: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-134601 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-134601 addons disable ingress-dns --alsologtostderr -v=1: (1.146268131s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-134601 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-134601 addons disable ingress --alsologtostderr -v=1: (7.735286876s)
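
The step that fails above is the ingress-dns check at addons_test.go:299: the test resolves hello-john.test against the cluster node IP (192.168.49.2, reported by minikube ip) and the query times out. A minimal manual re-run of that check, assuming the addons-134601 profile from this job is still up locally and the same testdata manifest is applied, would look roughly like:

	kubectl --context addons-134601 replace --force -f testdata/ingress-dns-example-v1.yaml
	IP="$(out/minikube-linux-arm64 -p addons-134601 ip)"    # 192.168.49.2 in this run
	nslookup hello-john.test "$IP"                          # expected to resolve; timed out here
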
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-134601
helpers_test.go:235: (dbg) docker inspect addons-134601:
-- stdout --
	[
	    {
	        "Id": "b66644773df38e1f944cd9afaef13e4e0a73693afd12db12fa91abe1e2ad43e4",
	        "Created": "2024-06-17T11:36:20.171228922Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 692357,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-06-17T11:36:20.466971903Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d36081176f43c9443534fbd23d834d14507b037430e066481145283247762ade",
	        "ResolvConfPath": "/var/lib/docker/containers/b66644773df38e1f944cd9afaef13e4e0a73693afd12db12fa91abe1e2ad43e4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b66644773df38e1f944cd9afaef13e4e0a73693afd12db12fa91abe1e2ad43e4/hostname",
	        "HostsPath": "/var/lib/docker/containers/b66644773df38e1f944cd9afaef13e4e0a73693afd12db12fa91abe1e2ad43e4/hosts",
	        "LogPath": "/var/lib/docker/containers/b66644773df38e1f944cd9afaef13e4e0a73693afd12db12fa91abe1e2ad43e4/b66644773df38e1f944cd9afaef13e4e0a73693afd12db12fa91abe1e2ad43e4-json.log",
	        "Name": "/addons-134601",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-134601:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-134601",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6792451e58e7e46a9a1769026ff397eb6b738b9a078679877aa89e3dad14a06e-init/diff:/var/lib/docker/overlay2/c07c2f412fc737ec224babdeaebc84a76c392761a424a81f6ee0a5caa5d8373f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6792451e58e7e46a9a1769026ff397eb6b738b9a078679877aa89e3dad14a06e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6792451e58e7e46a9a1769026ff397eb6b738b9a078679877aa89e3dad14a06e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6792451e58e7e46a9a1769026ff397eb6b738b9a078679877aa89e3dad14a06e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-134601",
	                "Source": "/var/lib/docker/volumes/addons-134601/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-134601",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-134601",
	                "name.minikube.sigs.k8s.io": "addons-134601",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5613be09e0f93124dfa47f8447c2e4db678798ab34d6d668314304667ee64086",
	            "SandboxKey": "/var/run/docker/netns/5613be09e0f9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33537"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33536"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33535"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-134601": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "080dd99e834a11070ccfccd4e76755df6a40d19a84e7196555666583eb000a1d",
	                    "EndpointID": "f268cf856b5e532f3bb436a09d8678af25a2a4c6efd7e43973316829ac9bfe98",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-134601",
	                        "b66644773df3"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
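
Individual fields of this inspect output can be pulled out with a Go template rather than reading the full JSON; the provisioning log further below does exactly that to find the host port mapped to 22/tcp. A minimal sketch against this container (port value taken from the output above):

	docker container inspect addons-134601 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'    # prints 33537 in this run
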
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-134601 -n addons-134601
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-134601 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-134601 logs -n 25: (1.401862546s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-994764                                                                     | download-only-994764   | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC | 17 Jun 24 11:35 UTC |
	| delete  | -p download-only-968605                                                                     | download-only-968605   | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC | 17 Jun 24 11:35 UTC |
	| start   | --download-only -p                                                                          | download-docker-079460 | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC |                     |
	|         | download-docker-079460                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p download-docker-079460                                                                   | download-docker-079460 | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC | 17 Jun 24 11:35 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-788244   | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC |                     |
	|         | binary-mirror-788244                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36423                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-788244                                                                     | binary-mirror-788244   | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC | 17 Jun 24 11:35 UTC |
	| addons  | enable dashboard -p                                                                         | addons-134601          | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC |                     |
	|         | addons-134601                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-134601          | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC |                     |
	|         | addons-134601                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-134601 --wait=true                                                                | addons-134601          | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC | 17 Jun 24 11:38 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-134601          | jenkins | v1.33.1 | 17 Jun 24 11:38 UTC | 17 Jun 24 11:38 UTC |
	|         | -p addons-134601                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-134601 ip                                                                            | addons-134601          | jenkins | v1.33.1 | 17 Jun 24 11:38 UTC | 17 Jun 24 11:38 UTC |
	| addons  | addons-134601 addons disable                                                                | addons-134601          | jenkins | v1.33.1 | 17 Jun 24 11:38 UTC | 17 Jun 24 11:38 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-134601          | jenkins | v1.33.1 | 17 Jun 24 11:38 UTC | 17 Jun 24 11:38 UTC |
	|         | -p addons-134601                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-134601 ssh cat                                                                       | addons-134601          | jenkins | v1.33.1 | 17 Jun 24 11:39 UTC | 17 Jun 24 11:39 UTC |
	|         | /opt/local-path-provisioner/pvc-cb684f52-f0cd-415f-a4e5-c14b80d7b47b_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-134601 addons disable                                                                | addons-134601          | jenkins | v1.33.1 | 17 Jun 24 11:39 UTC | 17 Jun 24 11:39 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-134601          | jenkins | v1.33.1 | 17 Jun 24 11:39 UTC | 17 Jun 24 11:39 UTC |
	|         | addons-134601                                                                               |                        |         |         |                     |                     |
	| addons  | addons-134601 addons                                                                        | addons-134601          | jenkins | v1.33.1 | 17 Jun 24 11:40 UTC | 17 Jun 24 11:40 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-134601 addons                                                                        | addons-134601          | jenkins | v1.33.1 | 17 Jun 24 11:40 UTC | 17 Jun 24 11:40 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-134601          | jenkins | v1.33.1 | 17 Jun 24 11:40 UTC | 17 Jun 24 11:40 UTC |
	|         | addons-134601                                                                               |                        |         |         |                     |                     |
	| addons  | addons-134601 addons                                                                        | addons-134601          | jenkins | v1.33.1 | 17 Jun 24 11:41 UTC | 17 Jun 24 11:41 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-134601 ssh curl -s                                                                   | addons-134601          | jenkins | v1.33.1 | 17 Jun 24 11:41 UTC | 17 Jun 24 11:41 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-134601 ip                                                                            | addons-134601          | jenkins | v1.33.1 | 17 Jun 24 11:41 UTC | 17 Jun 24 11:41 UTC |
	| addons  | addons-134601 addons disable                                                                | addons-134601          | jenkins | v1.33.1 | 17 Jun 24 11:41 UTC | 17 Jun 24 11:41 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-134601 addons disable                                                                | addons-134601          | jenkins | v1.33.1 | 17 Jun 24 11:41 UTC | 17 Jun 24 11:41 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-134601 addons disable                                                                | addons-134601          | jenkins | v1.33.1 | 17 Jun 24 11:41 UTC | 17 Jun 24 11:41 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 11:35:55
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 11:35:55.192447  691879 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:35:55.192927  691879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:35:55.192948  691879 out.go:304] Setting ErrFile to fd 2...
	I0617 11:35:55.192954  691879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:35:55.193286  691879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-685849/.minikube/bin
	I0617 11:35:55.193820  691879 out.go:298] Setting JSON to false
	I0617 11:35:55.194912  691879 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11903,"bootTime":1718612253,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0617 11:35:55.195026  691879 start.go:139] virtualization:  
	I0617 11:35:55.197506  691879 out.go:177] * [addons-134601] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0617 11:35:55.199793  691879 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:35:55.199837  691879 notify.go:220] Checking for updates...
	I0617 11:35:55.202472  691879 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:35:55.204885  691879 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-685849/kubeconfig
	I0617 11:35:55.206801  691879 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-685849/.minikube
	I0617 11:35:55.208549  691879 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0617 11:35:55.210345  691879 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:35:55.212409  691879 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:35:55.231465  691879 docker.go:122] docker version: linux-26.1.4:Docker Engine - Community
	I0617 11:35:55.231585  691879 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0617 11:35:55.296292  691879 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-06-17 11:35:55.287047442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0617 11:35:55.296406  691879 docker.go:295] overlay module found
	I0617 11:35:55.298656  691879 out.go:177] * Using the docker driver based on user configuration
	I0617 11:35:55.300522  691879 start.go:297] selected driver: docker
	I0617 11:35:55.300547  691879 start.go:901] validating driver "docker" against <nil>
	I0617 11:35:55.300560  691879 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:35:55.301226  691879 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0617 11:35:55.354309  691879 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-06-17 11:35:55.345320229 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0617 11:35:55.354478  691879 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 11:35:55.354725  691879 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:35:55.356740  691879 out.go:177] * Using Docker driver with root privileges
	I0617 11:35:55.358586  691879 cni.go:84] Creating CNI manager for ""
	I0617 11:35:55.358610  691879 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0617 11:35:55.358621  691879 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0617 11:35:55.358705  691879 start.go:340] cluster config:
	{Name:addons-134601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-134601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:35:55.361198  691879 out.go:177] * Starting "addons-134601" primary control-plane node in "addons-134601" cluster
	I0617 11:35:55.363180  691879 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0617 11:35:55.365410  691879 out.go:177] * Pulling base image v0.0.44-1718296336-19068 ...
	I0617 11:35:55.367161  691879 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime containerd
	I0617 11:35:55.367214  691879 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-685849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-containerd-overlay2-arm64.tar.lz4
	I0617 11:35:55.367228  691879 cache.go:56] Caching tarball of preloaded images
	I0617 11:35:55.367252  691879 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 in local docker daemon
	I0617 11:35:55.367324  691879 preload.go:173] Found /home/jenkins/minikube-integration/19084-685849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 11:35:55.367334  691879 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on containerd
	I0617 11:35:55.367694  691879 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/config.json ...
	I0617 11:35:55.367749  691879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/config.json: {Name:mkb08377f469fc1104db1717c4a08d3c7ada3e87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:35:55.381897  691879 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 to local cache
	I0617 11:35:55.382021  691879 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 in local cache directory
	I0617 11:35:55.382044  691879 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 in local cache directory, skipping pull
	I0617 11:35:55.382052  691879 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 exists in cache, skipping pull
	I0617 11:35:55.382060  691879 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 as a tarball
	I0617 11:35:55.382067  691879 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 from local cache
	I0617 11:36:12.587150  691879 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 from cached tarball
	I0617 11:36:12.587192  691879 cache.go:194] Successfully downloaded all kic artifacts
	I0617 11:36:12.587234  691879 start.go:360] acquireMachinesLock for addons-134601: {Name:mk09459f563f6a675c00ab1b69c0357e1205feb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 11:36:12.587359  691879 start.go:364] duration metric: took 101.75µs to acquireMachinesLock for "addons-134601"
	I0617 11:36:12.587389  691879 start.go:93] Provisioning new machine with config: &{Name:addons-134601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-134601 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0617 11:36:12.587531  691879 start.go:125] createHost starting for "" (driver="docker")
	I0617 11:36:12.589912  691879 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0617 11:36:12.590182  691879 start.go:159] libmachine.API.Create for "addons-134601" (driver="docker")
	I0617 11:36:12.590216  691879 client.go:168] LocalClient.Create starting
	I0617 11:36:12.590322  691879 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca.pem
	I0617 11:36:13.676399  691879 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/cert.pem
	I0617 11:36:14.290096  691879 cli_runner.go:164] Run: docker network inspect addons-134601 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0617 11:36:14.305139  691879 cli_runner.go:211] docker network inspect addons-134601 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0617 11:36:14.305221  691879 network_create.go:281] running [docker network inspect addons-134601] to gather additional debugging logs...
	I0617 11:36:14.305243  691879 cli_runner.go:164] Run: docker network inspect addons-134601
	W0617 11:36:14.319894  691879 cli_runner.go:211] docker network inspect addons-134601 returned with exit code 1
	I0617 11:36:14.319927  691879 network_create.go:284] error running [docker network inspect addons-134601]: docker network inspect addons-134601: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-134601 not found
	I0617 11:36:14.319941  691879 network_create.go:286] output of [docker network inspect addons-134601]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-134601 not found
	
	** /stderr **
	I0617 11:36:14.320055  691879 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0617 11:36:14.334621  691879 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004e0010}
	I0617 11:36:14.334663  691879 network_create.go:124] attempt to create docker network addons-134601 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0617 11:36:14.334721  691879 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-134601 addons-134601
	I0617 11:36:14.408328  691879 network_create.go:108] docker network addons-134601 192.168.49.0/24 created
	I0617 11:36:14.408359  691879 kic.go:121] calculated static IP "192.168.49.2" for the "addons-134601" container
	I0617 11:36:14.408437  691879 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0617 11:36:14.422874  691879 cli_runner.go:164] Run: docker volume create addons-134601 --label name.minikube.sigs.k8s.io=addons-134601 --label created_by.minikube.sigs.k8s.io=true
	I0617 11:36:14.438772  691879 oci.go:103] Successfully created a docker volume addons-134601
	I0617 11:36:14.438870  691879 cli_runner.go:164] Run: docker run --rm --name addons-134601-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-134601 --entrypoint /usr/bin/test -v addons-134601:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 -d /var/lib
	I0617 11:36:15.949810  691879 cli_runner.go:217] Completed: docker run --rm --name addons-134601-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-134601 --entrypoint /usr/bin/test -v addons-134601:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 -d /var/lib: (1.510886406s)
	I0617 11:36:15.949837  691879 oci.go:107] Successfully prepared a docker volume addons-134601
	I0617 11:36:15.949861  691879 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime containerd
	I0617 11:36:15.949881  691879 kic.go:194] Starting extracting preloaded images to volume ...
	I0617 11:36:15.949978  691879 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19084-685849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-134601:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0617 11:36:20.107495  691879 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19084-685849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-134601:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.157472664s)
	I0617 11:36:20.107533  691879 kic.go:203] duration metric: took 4.157647085s to extract preloaded images to volume ...
	W0617 11:36:20.107715  691879 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0617 11:36:20.107836  691879 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0617 11:36:20.156985  691879 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-134601 --name addons-134601 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-134601 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-134601 --network addons-134601 --ip 192.168.49.2 --volume addons-134601:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8
	I0617 11:36:20.474916  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Running}}
	I0617 11:36:20.501230  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
	I0617 11:36:20.522729  691879 cli_runner.go:164] Run: docker exec addons-134601 stat /var/lib/dpkg/alternatives/iptables
	I0617 11:36:20.585999  691879 oci.go:144] the created container "addons-134601" has a running status.
	I0617 11:36:20.586025  691879 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa...
	I0617 11:36:20.998617  691879 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0617 11:36:21.045805  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
	I0617 11:36:21.077013  691879 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0617 11:36:21.077031  691879 kic_runner.go:114] Args: [docker exec --privileged addons-134601 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0617 11:36:21.154204  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
	I0617 11:36:21.178000  691879 machine.go:94] provisionDockerMachine start ...
	I0617 11:36:21.178087  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:36:21.202309  691879 main.go:141] libmachine: Using SSH client type: native
	I0617 11:36:21.202574  691879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2bb0] 0x3e5410 <nil>  [] 0s} 127.0.0.1 33537 <nil> <nil>}
	I0617 11:36:21.202584  691879 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 11:36:21.351164  691879 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-134601
	
	I0617 11:36:21.351228  691879 ubuntu.go:169] provisioning hostname "addons-134601"
	I0617 11:36:21.351321  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:36:21.373088  691879 main.go:141] libmachine: Using SSH client type: native
	I0617 11:36:21.373323  691879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2bb0] 0x3e5410 <nil>  [] 0s} 127.0.0.1 33537 <nil> <nil>}
	I0617 11:36:21.373335  691879 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-134601 && echo "addons-134601" | sudo tee /etc/hostname
	I0617 11:36:21.517641  691879 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-134601
	
	I0617 11:36:21.517812  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:36:21.537281  691879 main.go:141] libmachine: Using SSH client type: native
	I0617 11:36:21.537514  691879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2bb0] 0x3e5410 <nil>  [] 0s} 127.0.0.1 33537 <nil> <nil>}
	I0617 11:36:21.537530  691879 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-134601' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-134601/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-134601' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 11:36:21.671535  691879 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 11:36:21.671566  691879 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19084-685849/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-685849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-685849/.minikube}
	I0617 11:36:21.671597  691879 ubuntu.go:177] setting up certificates
	I0617 11:36:21.671607  691879 provision.go:84] configureAuth start
	I0617 11:36:21.671672  691879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-134601
	I0617 11:36:21.687707  691879 provision.go:143] copyHostCerts
	I0617 11:36:21.687803  691879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-685849/.minikube/key.pem (1679 bytes)
	I0617 11:36:21.687922  691879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-685849/.minikube/ca.pem (1078 bytes)
	I0617 11:36:21.687989  691879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-685849/.minikube/cert.pem (1123 bytes)
	I0617 11:36:21.688066  691879 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-685849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca-key.pem org=jenkins.addons-134601 san=[127.0.0.1 192.168.49.2 addons-134601 localhost minikube]
	I0617 11:36:22.826781  691879 provision.go:177] copyRemoteCerts
	I0617 11:36:22.826872  691879 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 11:36:22.826915  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:36:22.844666  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	I0617 11:36:22.936111  691879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0617 11:36:22.959704  691879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0617 11:36:22.983379  691879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 11:36:23.008848  691879 provision.go:87] duration metric: took 1.337222833s to configureAuth
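
configureAuth signs a server certificate against the local minikube CA with the SANs listed above (127.0.0.1, 192.168.49.2, addons-134601, localhost, minikube). minikube does this in Go; an equivalent openssl sketch, with illustrative file names and validity, would look roughly like this:

	# Key and CSR for the server certificate (file names are illustrative).
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.addons-134601"
	
	# Sign the CSR with the CA, attaching the same SANs the log shows.
	openssl x509 -req -in server.csr -days 365 \
	  -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-134601,DNS:localhost,DNS:minikube') \
	  -out server.pem
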
	I0617 11:36:23.008876  691879 ubuntu.go:193] setting minikube options for container-runtime
	I0617 11:36:23.009071  691879 config.go:182] Loaded profile config "addons-134601": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0617 11:36:23.009084  691879 machine.go:97] duration metric: took 1.83106832s to provisionDockerMachine
	I0617 11:36:23.009092  691879 client.go:171] duration metric: took 10.418865321s to LocalClient.Create
	I0617 11:36:23.009112  691879 start.go:167] duration metric: took 10.418930772s to libmachine.API.Create "addons-134601"
	I0617 11:36:23.009122  691879 start.go:293] postStartSetup for "addons-134601" (driver="docker")
	I0617 11:36:23.009132  691879 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 11:36:23.009195  691879 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 11:36:23.009253  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:36:23.025337  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	I0617 11:36:23.116380  691879 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 11:36:23.119505  691879 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0617 11:36:23.119545  691879 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0617 11:36:23.119581  691879 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0617 11:36:23.119590  691879 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0617 11:36:23.119601  691879 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-685849/.minikube/addons for local assets ...
	I0617 11:36:23.119688  691879 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-685849/.minikube/files for local assets ...
	I0617 11:36:23.119716  691879 start.go:296] duration metric: took 110.586908ms for postStartSetup
	I0617 11:36:23.120037  691879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-134601
	I0617 11:36:23.135166  691879 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/config.json ...
	I0617 11:36:23.135508  691879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:36:23.135580  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:36:23.151365  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	I0617 11:36:23.240613  691879 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0617 11:36:23.245009  691879 start.go:128] duration metric: took 10.657462021s to createHost
	I0617 11:36:23.245033  691879 start.go:83] releasing machines lock for "addons-134601", held for 10.657662484s
	I0617 11:36:23.245103  691879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-134601
	I0617 11:36:23.263579  691879 ssh_runner.go:195] Run: cat /version.json
	I0617 11:36:23.263629  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:36:23.263695  691879 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 11:36:23.263743  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:36:23.284903  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	I0617 11:36:23.287610  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	I0617 11:36:23.370750  691879 ssh_runner.go:195] Run: systemctl --version
	I0617 11:36:23.494078  691879 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0617 11:36:23.498212  691879 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0617 11:36:23.522009  691879 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0617 11:36:23.522126  691879 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 11:36:23.549243  691879 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0617 11:36:23.549277  691879 start.go:494] detecting cgroup driver to use...
	I0617 11:36:23.549327  691879 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0617 11:36:23.549403  691879 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0617 11:36:23.562005  691879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0617 11:36:23.573234  691879 docker.go:217] disabling cri-docker service (if available) ...
	I0617 11:36:23.573351  691879 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 11:36:23.587198  691879 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 11:36:23.601680  691879 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 11:36:23.683967  691879 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 11:36:23.771755  691879 docker.go:233] disabling docker service ...
	I0617 11:36:23.771824  691879 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 11:36:23.792468  691879 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 11:36:23.804806  691879 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 11:36:23.895367  691879 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 11:36:23.997727  691879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 11:36:24.011850  691879 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 11:36:24.032872  691879 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0617 11:36:24.045213  691879 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0617 11:36:24.056856  691879 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0617 11:36:24.056933  691879 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0617 11:36:24.068842  691879 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0617 11:36:24.080200  691879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0617 11:36:24.092330  691879 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0617 11:36:24.102858  691879 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 11:36:24.113320  691879 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0617 11:36:24.125495  691879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0617 11:36:24.136219  691879 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0617 11:36:24.146430  691879 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 11:36:24.155889  691879 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 11:36:24.164764  691879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:36:24.249627  691879 ssh_runner.go:195] Run: sudo systemctl restart containerd
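
Taken together, the commands above point crictl at containerd's CRI socket and patch /etc/containerd/config.toml so containerd and the kubelet agree on the cgroupfs driver, the pause image, and the CNI config directory. A condensed sketch of the same edits, assuming a stock config.toml layout:

	#!/usr/bin/env bash
	set -euo pipefail
	
	# Tell crictl where containerd's CRI socket lives.
	printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
	
	CFG=/etc/containerd/config.toml
	
	# Match the kubelet's cgroup driver (cgroupfs here) and the expected pause image.
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$CFG"
	
	# Point containerd's CNI plugin at the directory minikube manages.
	sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$CFG"
	
	# Kube-proxy and the CNI need IPv4 forwarding inside the node.
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	
	sudo systemctl daemon-reload
	sudo systemctl restart containerd
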
	I0617 11:36:24.374905  691879 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0617 11:36:24.374993  691879 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0617 11:36:24.378596  691879 start.go:562] Will wait 60s for crictl version
	I0617 11:36:24.378700  691879 ssh_runner.go:195] Run: which crictl
	I0617 11:36:24.381951  691879 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 11:36:24.425400  691879 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.33
	RuntimeApiVersion:  v1
	I0617 11:36:24.425539  691879 ssh_runner.go:195] Run: containerd --version
	I0617 11:36:24.447065  691879 ssh_runner.go:195] Run: containerd --version
	I0617 11:36:24.471612  691879 out.go:177] * Preparing Kubernetes v1.30.1 on containerd 1.6.33 ...
	I0617 11:36:24.473460  691879 cli_runner.go:164] Run: docker network inspect addons-134601 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0617 11:36:24.490521  691879 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0617 11:36:24.494144  691879 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
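
The one-liner above drops any existing host.minikube.internal entry and rewrites /etc/hosts in a single pass rather than editing it in place. The same pattern, unrolled into a small function for readability (the function name is illustrative):

	# Ensure exactly one "<ip><TAB><name>" line exists in /etc/hosts.
	set_hosts_entry() {
	  local ip=$1 name=$2
	  { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts
	  rm -f /tmp/h.$$
	}
	
	set_hosts_entry 192.168.49.1 host.minikube.internal
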
	I0617 11:36:24.505074  691879 kubeadm.go:877] updating cluster {Name:addons-134601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-134601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 11:36:24.505196  691879 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime containerd
	I0617 11:36:24.505262  691879 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:36:24.540893  691879 containerd.go:627] all images are preloaded for containerd runtime.
	I0617 11:36:24.540917  691879 containerd.go:534] Images already preloaded, skipping extraction
	I0617 11:36:24.540980  691879 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 11:36:24.578255  691879 containerd.go:627] all images are preloaded for containerd runtime.
	I0617 11:36:24.578275  691879 cache_images.go:84] Images are preloaded, skipping loading
	I0617 11:36:24.578282  691879 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 containerd true true} ...
	I0617 11:36:24.578373  691879 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-134601 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-134601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 11:36:24.578434  691879 ssh_runner.go:195] Run: sudo crictl info
	I0617 11:36:24.616885  691879 cni.go:84] Creating CNI manager for ""
	I0617 11:36:24.616908  691879 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0617 11:36:24.616918  691879 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 11:36:24.616940  691879 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-134601 NodeName:addons-134601 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 11:36:24.617071  691879 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-134601"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
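
The rendered InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration above are what kubeadm is later invoked with. One way to inspect the files minikube actually ships to the node, using the paths that appear further down in this log (profile name assumed):

	# The kubeadm config is staged under /var/tmp/minikube before "kubeadm init" runs.
	minikube -p addons-134601 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml
	
	# The kubelet flags written alongside it live in the systemd drop-in.
	minikube -p addons-134601 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
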
	
	I0617 11:36:24.617144  691879 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 11:36:24.625700  691879 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 11:36:24.625770  691879 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 11:36:24.633970  691879 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0617 11:36:24.651383  691879 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 11:36:24.669081  691879 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0617 11:36:24.686945  691879 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0617 11:36:24.690264  691879 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 11:36:24.700746  691879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:36:24.794190  691879 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:36:24.809633  691879 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601 for IP: 192.168.49.2
	I0617 11:36:24.809652  691879 certs.go:194] generating shared ca certs ...
	I0617 11:36:24.809668  691879 certs.go:226] acquiring lock for ca certs: {Name:mkd182a8d082c6d0615c99aed3d4d2e0a9bb102c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:36:24.809799  691879 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-685849/.minikube/ca.key
	I0617 11:36:25.430101  691879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-685849/.minikube/ca.crt ...
	I0617 11:36:25.430133  691879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-685849/.minikube/ca.crt: {Name:mk79daf2140a9ebb346032ad8180e82fd0c9bae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:36:25.430329  691879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-685849/.minikube/ca.key ...
	I0617 11:36:25.430342  691879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-685849/.minikube/ca.key: {Name:mka740ce75a647c0cf0eb6706d7fda02adc3099f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:36:25.430439  691879 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-685849/.minikube/proxy-client-ca.key
	I0617 11:36:25.823743  691879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-685849/.minikube/proxy-client-ca.crt ...
	I0617 11:36:25.823778  691879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-685849/.minikube/proxy-client-ca.crt: {Name:mk95474792d01340dba5cc7fe955e2cd54718da2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:36:25.824477  691879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-685849/.minikube/proxy-client-ca.key ...
	I0617 11:36:25.824495  691879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-685849/.minikube/proxy-client-ca.key: {Name:mk8472fd58e828c410e3a46367411ca5816b8527 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:36:25.824677  691879 certs.go:256] generating profile certs ...
	I0617 11:36:25.824752  691879 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.key
	I0617 11:36:25.824770  691879 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt with IP's: []
	I0617 11:36:26.529958  691879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt ...
	I0617 11:36:26.529993  691879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: {Name:mkc844e9367d9a17363f31eb1cb61e5983015611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:36:26.530231  691879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.key ...
	I0617 11:36:26.530246  691879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.key: {Name:mk6dfd46435f0ed78e07ca6d50b69de5d410d3d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:36:26.530343  691879 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/apiserver.key.1f377a19
	I0617 11:36:26.530362  691879 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/apiserver.crt.1f377a19 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0617 11:36:26.815156  691879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/apiserver.crt.1f377a19 ...
	I0617 11:36:26.815187  691879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/apiserver.crt.1f377a19: {Name:mk32703b2dafc2b81a859e4fb4b3164a3f15b1ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:36:26.815367  691879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/apiserver.key.1f377a19 ...
	I0617 11:36:26.815382  691879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/apiserver.key.1f377a19: {Name:mk32df492615f149ef55cadad21499968c388de2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:36:26.815489  691879 certs.go:381] copying /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/apiserver.crt.1f377a19 -> /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/apiserver.crt
	I0617 11:36:26.815574  691879 certs.go:385] copying /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/apiserver.key.1f377a19 -> /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/apiserver.key
	I0617 11:36:26.815628  691879 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/proxy-client.key
	I0617 11:36:26.815648  691879 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/proxy-client.crt with IP's: []
	I0617 11:36:27.049684  691879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/proxy-client.crt ...
	I0617 11:36:27.049724  691879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/proxy-client.crt: {Name:mkca9b1d6c742d596941f17d555b795b0bf66f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:36:27.049934  691879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/proxy-client.key ...
	I0617 11:36:27.049952  691879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/proxy-client.key: {Name:mk8912b6ff549d26c2625eeea60b702619a66016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:36:27.050152  691879 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 11:36:27.050201  691879 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca.pem (1078 bytes)
	I0617 11:36:27.050233  691879 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/cert.pem (1123 bytes)
	I0617 11:36:27.050262  691879 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/key.pem (1679 bytes)
	I0617 11:36:27.050865  691879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 11:36:27.077592  691879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0617 11:36:27.104014  691879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 11:36:27.133588  691879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0617 11:36:27.162301  691879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0617 11:36:27.189363  691879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0617 11:36:27.215174  691879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 11:36:27.239775  691879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0617 11:36:27.264273  691879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 11:36:27.288462  691879 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 11:36:27.305806  691879 ssh_runner.go:195] Run: openssl version
	I0617 11:36:27.311149  691879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 11:36:27.320636  691879 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:36:27.323961  691879 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 11:36 /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:36:27.324021  691879 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 11:36:27.330665  691879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
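
The b5213941.0 link created above is the OpenSSL subject-hash alias for the minikube CA; tools that walk /etc/ssl/certs look certificates up by that hash. A minimal sketch of the same trust installation on a node, using the paths shown in the log:

	CA=/usr/share/ca-certificates/minikubeCA.pem
	
	# Link the CA into the shared certificates directory...
	sudo ln -fs "$CA" /etc/ssl/certs/minikubeCA.pem
	
	# ...and add the <subject-hash>.0 alias OpenSSL uses for lookups.
	HASH=$(openssl x509 -hash -noout -in "$CA")
	sudo test -L "/etc/ssl/certs/$HASH.0" || sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$HASH.0"
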
	I0617 11:36:27.340231  691879 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 11:36:27.343380  691879 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0617 11:36:27.343443  691879 kubeadm.go:391] StartCluster: {Name:addons-134601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-134601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:36:27.343526  691879 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0617 11:36:27.343584  691879 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 11:36:27.380764  691879 cri.go:89] found id: ""
	I0617 11:36:27.380836  691879 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0617 11:36:27.389537  691879 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0617 11:36:27.398169  691879 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0617 11:36:27.398232  691879 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0617 11:36:27.406806  691879 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0617 11:36:27.406829  691879 kubeadm.go:156] found existing configuration files:
	
	I0617 11:36:27.406899  691879 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0617 11:36:27.415599  691879 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0617 11:36:27.415660  691879 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0617 11:36:27.424049  691879 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0617 11:36:27.432797  691879 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0617 11:36:27.432871  691879 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0617 11:36:27.441104  691879 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0617 11:36:27.449676  691879 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0617 11:36:27.449737  691879 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0617 11:36:27.457895  691879 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0617 11:36:27.466488  691879 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0617 11:36:27.466555  691879 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0617 11:36:27.474979  691879 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0617 11:36:27.559011  691879 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1063-aws\n", err: exit status 1
	I0617 11:36:27.628718  691879 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0617 11:36:46.483256  691879 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0617 11:36:46.483315  691879 kubeadm.go:309] [preflight] Running pre-flight checks
	I0617 11:36:46.483399  691879 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0617 11:36:46.483476  691879 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1063-aws
	I0617 11:36:46.483512  691879 kubeadm.go:309] OS: Linux
	I0617 11:36:46.483565  691879 kubeadm.go:309] CGROUPS_CPU: enabled
	I0617 11:36:46.483616  691879 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0617 11:36:46.483665  691879 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0617 11:36:46.483716  691879 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0617 11:36:46.483764  691879 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0617 11:36:46.483816  691879 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0617 11:36:46.483863  691879 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0617 11:36:46.483913  691879 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0617 11:36:46.483960  691879 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0617 11:36:46.484033  691879 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0617 11:36:46.484127  691879 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0617 11:36:46.484220  691879 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0617 11:36:46.484284  691879 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0617 11:36:46.486249  691879 out.go:204]   - Generating certificates and keys ...
	I0617 11:36:46.486338  691879 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0617 11:36:46.486406  691879 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0617 11:36:46.486474  691879 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0617 11:36:46.486534  691879 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0617 11:36:46.486597  691879 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0617 11:36:46.486649  691879 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0617 11:36:46.486707  691879 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0617 11:36:46.486823  691879 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-134601 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0617 11:36:46.486883  691879 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0617 11:36:46.487000  691879 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-134601 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0617 11:36:46.487068  691879 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0617 11:36:46.487134  691879 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0617 11:36:46.487181  691879 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0617 11:36:46.487240  691879 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0617 11:36:46.487292  691879 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0617 11:36:46.487350  691879 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0617 11:36:46.487404  691879 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0617 11:36:46.487513  691879 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0617 11:36:46.487582  691879 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0617 11:36:46.487680  691879 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0617 11:36:46.487763  691879 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0617 11:36:46.489639  691879 out.go:204]   - Booting up control plane ...
	I0617 11:36:46.489747  691879 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0617 11:36:46.489849  691879 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0617 11:36:46.489930  691879 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0617 11:36:46.490038  691879 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0617 11:36:46.490133  691879 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0617 11:36:46.490179  691879 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0617 11:36:46.490332  691879 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0617 11:36:46.490415  691879 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0617 11:36:46.490472  691879 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 2.501929769s
	I0617 11:36:46.490539  691879 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0617 11:36:46.490595  691879 kubeadm.go:309] [api-check] The API server is healthy after 6.00186563s
	I0617 11:36:46.490696  691879 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0617 11:36:46.490815  691879 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0617 11:36:46.490879  691879 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0617 11:36:46.491054  691879 kubeadm.go:309] [mark-control-plane] Marking the node addons-134601 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0617 11:36:46.491108  691879 kubeadm.go:309] [bootstrap-token] Using token: yrsv16.mt35ve1y9ihhwy28
	I0617 11:36:46.493000  691879 out.go:204]   - Configuring RBAC rules ...
	I0617 11:36:46.493117  691879 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0617 11:36:46.493208  691879 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0617 11:36:46.493348  691879 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0617 11:36:46.493490  691879 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0617 11:36:46.493605  691879 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0617 11:36:46.493690  691879 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0617 11:36:46.493812  691879 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0617 11:36:46.493859  691879 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0617 11:36:46.493903  691879 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0617 11:36:46.493907  691879 kubeadm.go:309] 
	I0617 11:36:46.493965  691879 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0617 11:36:46.493968  691879 kubeadm.go:309] 
	I0617 11:36:46.494044  691879 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0617 11:36:46.494048  691879 kubeadm.go:309] 
	I0617 11:36:46.494073  691879 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0617 11:36:46.494129  691879 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0617 11:36:46.494178  691879 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0617 11:36:46.494181  691879 kubeadm.go:309] 
	I0617 11:36:46.494234  691879 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0617 11:36:46.494237  691879 kubeadm.go:309] 
	I0617 11:36:46.494292  691879 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0617 11:36:46.494296  691879 kubeadm.go:309] 
	I0617 11:36:46.494348  691879 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0617 11:36:46.494420  691879 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0617 11:36:46.494485  691879 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0617 11:36:46.494490  691879 kubeadm.go:309] 
	I0617 11:36:46.494571  691879 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0617 11:36:46.494654  691879 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0617 11:36:46.494668  691879 kubeadm.go:309] 
	I0617 11:36:46.494751  691879 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token yrsv16.mt35ve1y9ihhwy28 \
	I0617 11:36:46.494857  691879 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dc4b907f606e2c80144c7b9bd3e930cd226e10982953b09171123a8759c70db4 \
	I0617 11:36:46.494877  691879 kubeadm.go:309] 	--control-plane 
	I0617 11:36:46.494881  691879 kubeadm.go:309] 
	I0617 11:36:46.494963  691879 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0617 11:36:46.494967  691879 kubeadm.go:309] 
	I0617 11:36:46.495047  691879 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token yrsv16.mt35ve1y9ihhwy28 \
	I0617 11:36:46.495183  691879 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:dc4b907f606e2c80144c7b9bd3e930cd226e10982953b09171123a8759c70db4 
	I0617 11:36:46.495192  691879 cni.go:84] Creating CNI manager for ""
	I0617 11:36:46.495199  691879 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0617 11:36:46.497041  691879 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0617 11:36:46.498690  691879 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0617 11:36:46.502688  691879 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0617 11:36:46.502709  691879 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0617 11:36:46.521138  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0617 11:36:46.797205  691879 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0617 11:36:46.797343  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:46.797425  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-134601 minikube.k8s.io/updated_at=2024_06_17T11_36_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6 minikube.k8s.io/name=addons-134601 minikube.k8s.io/primary=true
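
After init, minikube grants cluster-admin to the kube-system:default ServiceAccount and stamps the node with its own minikube.k8s.io/* labels. A quick way to confirm both took effect, run against whatever kubeconfig points at the new cluster (node name as above):

	# The binding created with "kubectl create clusterrolebinding minikube-rbac ...".
	kubectl get clusterrolebinding minikube-rbac -o wide
	
	# The minikube.k8s.io/* labels applied to the control-plane node.
	kubectl get node addons-134601 --show-labels
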
	I0617 11:36:46.956611  691879 ops.go:34] apiserver oom_adj: -16
	I0617 11:36:46.956719  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:47.457860  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:47.957299  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:48.457577  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:48.956777  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:49.457572  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:49.957357  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:50.456873  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:50.957105  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:51.456870  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:51.957288  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:52.457173  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:52.957583  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:53.457863  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:53.957478  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:54.457385  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:54.957589  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:55.457698  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:55.956846  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:56.457516  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:56.957081  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:57.457739  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:57.957106  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:58.457678  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:58.957125  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:59.457819  691879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0617 11:36:59.560899  691879 kubeadm.go:1107] duration metric: took 12.763602634s to wait for elevateKubeSystemPrivileges
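
The run of `kubectl get sa default` lines above is a poll loop: minikube re-checks roughly every 500ms until the default service account exists, then logs the total wait (12.76s here). A minimal sketch of that polling pattern, assuming a hypothetical saExists helper; this is an illustration, not minikube's kubeadm.go code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // saExists is a hypothetical helper: it reports whether
    // `kubectl get sa default` succeeds against the node's kubeconfig,
    // mirroring the repeated ssh_runner calls in the log above.
    func saExists() bool {
    	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.1/kubectl",
    		"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
    	return cmd.Run() == nil
    }

    func main() {
    	start := time.Now()
    	for !saExists() {
    		time.Sleep(500 * time.Millisecond) // ~500ms spacing, matching the timestamps above
    	}
    	fmt.Printf("took %s to wait for elevateKubeSystemPrivileges\n", time.Since(start))
    }
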
	W0617 11:36:59.560931  691879 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0617 11:36:59.560938  691879 kubeadm.go:393] duration metric: took 32.217500526s to StartCluster
	I0617 11:36:59.560953  691879 settings.go:142] acquiring lock: {Name:mk2a85dcb9c00537cffe742aea475ca7d2cf09a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:36:59.561058  691879 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-685849/kubeconfig
	I0617 11:36:59.561443  691879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-685849/kubeconfig: {Name:mk0f1db8295cd0d3b8a0428491dac563579b7b2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 11:36:59.561625  691879 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0617 11:36:59.563506  691879 out.go:177] * Verifying Kubernetes components...
	I0617 11:36:59.561758  691879 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0617 11:36:59.561940  691879 config.go:182] Loaded profile config "addons-134601": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0617 11:36:59.561950  691879 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
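
The toEnable map printed above drives the "Setting addon ...=true" lines that follow: only the keys marked true get an enable pass. A tiny hedged sketch of that shape, using a subset of the addon names from the log (the print format is illustrative, not addons.go itself):

    package main

    import "fmt"

    func main() {
    	// Subset of the toEnable map shown above; true means the addon is turned on.
    	toEnable := map[string]bool{
    		"ingress":        true,
    		"ingress-dns":    true,
    		"metrics-server": true,
    		"registry":       true,
    		"dashboard":      false,
    	}
    	for name, enabled := range toEnable {
    		if enabled {
    			fmt.Printf("Setting addon %s=true in %q\n", name, "addons-134601")
    		}
    	}
    }
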
	I0617 11:36:59.565153  691879 addons.go:69] Setting yakd=true in profile "addons-134601"
	I0617 11:36:59.565181  691879 addons.go:234] Setting addon yakd=true in "addons-134601"
	I0617 11:36:59.565215  691879 host.go:66] Checking if "addons-134601" exists ...
	I0617 11:36:59.565683  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
	I0617 11:36:59.565772  691879 addons.go:69] Setting ingress-dns=true in profile "addons-134601"
	I0617 11:36:59.565794  691879 addons.go:234] Setting addon ingress-dns=true in "addons-134601"
	I0617 11:36:59.565820  691879 host.go:66] Checking if "addons-134601" exists ...
	I0617 11:36:59.566190  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
	I0617 11:36:59.566631  691879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 11:36:59.566786  691879 addons.go:69] Setting cloud-spanner=true in profile "addons-134601"
	I0617 11:36:59.566808  691879 addons.go:234] Setting addon cloud-spanner=true in "addons-134601"
	I0617 11:36:59.566828  691879 host.go:66] Checking if "addons-134601" exists ...
	I0617 11:36:59.567182  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
	I0617 11:36:59.567800  691879 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-134601"
	I0617 11:36:59.567846  691879 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-134601"
	I0617 11:36:59.567876  691879 host.go:66] Checking if "addons-134601" exists ...
	I0617 11:36:59.568228  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
	I0617 11:36:59.569687  691879 addons.go:69] Setting inspektor-gadget=true in profile "addons-134601"
	I0617 11:36:59.569715  691879 addons.go:234] Setting addon inspektor-gadget=true in "addons-134601"
	I0617 11:36:59.569739  691879 host.go:66] Checking if "addons-134601" exists ...
	I0617 11:36:59.570115  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
	I0617 11:36:59.575800  691879 addons.go:69] Setting metrics-server=true in profile "addons-134601"
	I0617 11:36:59.575845  691879 addons.go:234] Setting addon metrics-server=true in "addons-134601"
	I0617 11:36:59.575889  691879 host.go:66] Checking if "addons-134601" exists ...
	I0617 11:36:59.576297  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
	I0617 11:36:59.580052  691879 addons.go:69] Setting default-storageclass=true in profile "addons-134601"
	I0617 11:36:59.603538  691879 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-134601"
	I0617 11:36:59.603916  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
	I0617 11:36:59.580212  691879 addons.go:69] Setting gcp-auth=true in profile "addons-134601"
	I0617 11:36:59.606897  691879 mustload.go:65] Loading cluster: addons-134601
	I0617 11:36:59.610539  691879 config.go:182] Loaded profile config "addons-134601": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0617 11:36:59.610946  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
	I0617 11:36:59.580223  691879 addons.go:69] Setting ingress=true in profile "addons-134601"
	I0617 11:36:59.626931  691879 addons.go:234] Setting addon ingress=true in "addons-134601"
	I0617 11:36:59.627010  691879 host.go:66] Checking if "addons-134601" exists ...
	I0617 11:36:59.585684  691879 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-134601"
	I0617 11:36:59.640766  691879 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-134601"
	I0617 11:36:59.640835  691879 host.go:66] Checking if "addons-134601" exists ...
	I0617 11:36:59.641298  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
	I0617 11:36:59.642802  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
	I0617 11:36:59.585696  691879 addons.go:69] Setting registry=true in profile "addons-134601"
	I0617 11:36:59.649768  691879 addons.go:234] Setting addon registry=true in "addons-134601"
	I0617 11:36:59.649836  691879 host.go:66] Checking if "addons-134601" exists ...
	I0617 11:36:59.650365  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
	I0617 11:36:59.585706  691879 addons.go:69] Setting storage-provisioner=true in profile "addons-134601"
	I0617 11:36:59.671592  691879 addons.go:234] Setting addon storage-provisioner=true in "addons-134601"
	I0617 11:36:59.671668  691879 host.go:66] Checking if "addons-134601" exists ...
	I0617 11:36:59.672150  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
	I0617 11:36:59.585710  691879 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-134601"
	I0617 11:36:59.585712  691879 addons.go:69] Setting volcano=true in profile "addons-134601"
	I0617 11:36:59.585716  691879 addons.go:69] Setting volumesnapshots=true in profile "addons-134601"
	I0617 11:36:59.687093  691879 addons.go:234] Setting addon volumesnapshots=true in "addons-134601"
	I0617 11:36:59.687140  691879 host.go:66] Checking if "addons-134601" exists ...
	I0617 11:36:59.687734  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
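
Each addon toggle above is followed by a host check that shells out to `docker container inspect --format={{.State.Status}}` to confirm the profile's container is up. A minimal sketch of that check, assuming a hypothetical containerStatus wrapper rather than minikube's cli_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerStatus returns the Docker state string (e.g. "running") for a
    // profile container, mirroring the inspect calls in the log above.
    func containerStatus(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}").Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	status, err := containerStatus("addons-134601")
    	if err != nil {
    		fmt.Println("host check failed:", err)
    		return
    	}
    	fmt.Println("addons-134601 is", status)
    }
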
	I0617 11:36:59.729855  691879 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0617 11:36:59.734812  691879 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0617 11:36:59.734905  691879 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0617 11:36:59.735009  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:36:59.729505  691879 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0617 11:36:59.729535  691879 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-134601"
	I0617 11:36:59.729563  691879 addons.go:234] Setting addon volcano=true in "addons-134601"
	I0617 11:36:59.749469  691879 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0617 11:36:59.749898  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
	I0617 11:36:59.749942  691879 host.go:66] Checking if "addons-134601" exists ...
	I0617 11:36:59.753802  691879 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
	I0617 11:36:59.753809  691879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0617 11:36:59.767647  691879 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0617 11:36:59.767712  691879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0617 11:36:59.767814  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:36:59.776177  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
	I0617 11:36:59.755532  691879 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0617 11:36:59.787593  691879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0617 11:36:59.787682  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:36:59.806063  691879 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0617 11:36:59.806083  691879 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0617 11:36:59.806147  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:36:59.808058  691879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0617 11:36:59.831979  691879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0617 11:36:59.804669  691879 addons.go:234] Setting addon default-storageclass=true in "addons-134601"
	I0617 11:36:59.804955  691879 host.go:66] Checking if "addons-134601" exists ...
	I0617 11:36:59.844682  691879 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0617 11:36:59.846826  691879 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0617 11:36:59.846861  691879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0617 11:36:59.846927  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:36:59.869099  691879 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0617 11:36:59.839681  691879 host.go:66] Checking if "addons-134601" exists ...
	I0617 11:36:59.839692  691879 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0617 11:36:59.884967  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
	I0617 11:36:59.893177  691879 out.go:177]   - Using image docker.io/registry:2.8.3
	I0617 11:36:59.893186  691879 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0617 11:36:59.893191  691879 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 11:36:59.893194  691879 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0617 11:36:59.899806  691879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0617 11:36:59.897891  691879 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 11:36:59.897902  691879 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.7.0
	I0617 11:36:59.902511  691879 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 11:36:59.906099  691879 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 11:36:59.906108  691879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0617 11:36:59.906257  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:36:59.910595  691879 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0617 11:36:59.913344  691879 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0617 11:36:59.910644  691879 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0617 11:36:59.910760  691879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 11:36:59.915284  691879 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0617 11:36:59.915293  691879 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.7.0
	I0617 11:36:59.920068  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:36:59.925082  691879 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0617 11:36:59.925106  691879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0617 11:36:59.933103  691879 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.7.0
	I0617 11:36:59.931805  691879 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0617 11:36:59.931815  691879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0617 11:36:59.932417  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:36:59.980521  691879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0617 11:36:59.980673  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:36:59.990079  691879 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0617 11:36:59.990102  691879 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0617 11:36:59.990160  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:36:59.997119  691879 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0617 11:36:59.997144  691879 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0617 11:36:59.997214  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:37:00.030374  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	I0617 11:37:00.031872  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	I0617 11:37:00.043864  691879 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 11:37:00.043936  691879 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 11:37:00.044034  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:37:00.058740  691879 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-134601"
	I0617 11:37:00.058798  691879 host.go:66] Checking if "addons-134601" exists ...
	I0617 11:37:00.059266  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
	I0617 11:37:00.062828  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	I0617 11:37:00.063868  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	I0617 11:37:00.086649  691879 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0617 11:37:00.086737  691879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (626760 bytes)
	I0617 11:37:00.086859  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:37:00.107917  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	I0617 11:37:00.108339  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	I0617 11:37:00.144612  691879 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 11:37:00.145016  691879 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
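
The bash pipeline above patches the coredns ConfigMap in place: the first sed expression inserts a hosts block (mapping 192.168.49.1 to host.minikube.internal) immediately before the `forward . /etc/resolv.conf` directive, the second inserts `log` before `errors`, and the result is piped back through `kubectl replace -f -`. Derived purely from those sed expressions, the patched Corefile fragment ends up looking roughly like this (other stock directives omitted):

            log
            errors
            ...
            hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf

The "host record injected into CoreDNS's ConfigMap" line further down confirms the replacement completed.
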
	I0617 11:37:00.173080  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	I0617 11:37:00.184186  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	I0617 11:37:00.185144  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	I0617 11:37:00.216925  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	I0617 11:37:00.242299  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	W0617 11:37:00.248898  691879 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0617 11:37:00.248953  691879 retry.go:31] will retry after 316.650336ms: ssh: handshake failed: EOF
	I0617 11:37:00.262092  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	I0617 11:37:00.264912  691879 out.go:177]   - Using image docker.io/busybox:stable
	I0617 11:37:00.262757  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	I0617 11:37:00.269133  691879 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0617 11:37:00.271087  691879 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0617 11:37:00.271111  691879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0617 11:37:00.271194  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:37:00.309196  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	W0617 11:37:00.310391  691879 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0617 11:37:00.310434  691879 retry.go:31] will retry after 320.395408ms: ssh: handshake failed: EOF
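
The two "handshake failed: EOF" warnings above are early SSH dials racing the node's sshd; retry.go simply backs off for a few hundred milliseconds and dials again. A minimal stdlib sketch of that retry-with-short-backoff shape (the dialSSH stub is hypothetical, not minikube's sshutil/retry packages):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // dialSSH stands in for establishing the SSH session; it fails a couple of
    // times to imitate the "handshake failed: EOF" lines above.
    func dialSSH(attempt int) error {
    	if attempt < 2 {
    		return errors.New("ssh: handshake failed: EOF")
    	}
    	return nil
    }

    func main() {
    	for attempt := 0; ; attempt++ {
    		if err := dialSSH(attempt); err != nil {
    			wait := time.Duration(200+rand.Intn(200)) * time.Millisecond
    			fmt.Printf("will retry after %v: %v\n", wait, err)
    			time.Sleep(wait)
    			continue
    		}
    		fmt.Println("ssh client established")
    		return
    	}
    }
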
	I0617 11:37:00.415945  691879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0617 11:37:00.418442  691879 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 11:37:00.418463  691879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0617 11:37:00.561630  691879 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 11:37:00.561657  691879 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 11:37:00.582065  691879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0617 11:37:00.617776  691879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0617 11:37:00.624420  691879 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0617 11:37:00.624448  691879 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0617 11:37:00.671752  691879 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0617 11:37:00.671780  691879 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0617 11:37:00.683640  691879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0617 11:37:00.710134  691879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 11:37:00.732224  691879 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0617 11:37:00.732252  691879 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0617 11:37:00.775556  691879 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 11:37:00.775585  691879 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 11:37:00.794605  691879 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0617 11:37:00.794677  691879 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0617 11:37:00.806299  691879 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0617 11:37:00.806335  691879 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0617 11:37:00.810687  691879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 11:37:00.832329  691879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0617 11:37:00.918554  691879 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0617 11:37:00.918620  691879 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0617 11:37:00.934343  691879 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0617 11:37:00.934373  691879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0617 11:37:01.036415  691879 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0617 11:37:01.036445  691879 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0617 11:37:01.068180  691879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 11:37:01.089567  691879 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0617 11:37:01.089595  691879 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0617 11:37:01.154173  691879 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0617 11:37:01.154202  691879 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0617 11:37:01.202729  691879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0617 11:37:01.232347  691879 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0617 11:37:01.232376  691879 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0617 11:37:01.292374  691879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0617 11:37:01.295512  691879 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0617 11:37:01.295539  691879 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0617 11:37:01.364207  691879 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0617 11:37:01.364238  691879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0617 11:37:01.512069  691879 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0617 11:37:01.512100  691879 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0617 11:37:01.717874  691879 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0617 11:37:01.717898  691879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0617 11:37:01.757606  691879 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0617 11:37:01.757646  691879 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0617 11:37:01.758431  691879 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0617 11:37:01.758477  691879 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0617 11:37:01.820572  691879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0617 11:37:02.055917  691879 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.910872599s)
	I0617 11:37:02.056060  691879 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0617 11:37:02.055999  691879 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.91136117s)
	I0617 11:37:02.057028  691879 node_ready.go:35] waiting up to 6m0s for node "addons-134601" to be "Ready" ...
	I0617 11:37:02.060556  691879 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0617 11:37:02.060597  691879 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0617 11:37:02.068121  691879 node_ready.go:49] node "addons-134601" has status "Ready":"True"
	I0617 11:37:02.068202  691879 node_ready.go:38] duration metric: took 11.143135ms for node "addons-134601" to be "Ready" ...
	I0617 11:37:02.068228  691879 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 11:37:02.079039  691879 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dhctc" in "kube-system" namespace to be "Ready" ...
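
The pod_ready waits above and below amount to polling each system-critical pod's Ready condition until it reports True (or the 6m budget runs out). A minimal stdlib sketch of that check via kubectl's jsonpath output; an illustration only, not the pod_ready.go implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // podReady reports whether the named pod's Ready condition is "True",
    // read through kubectl's jsonpath output.
    func podReady(namespace, name string) bool {
    	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func main() {
    	for !podReady("kube-system", "coredns-7db6d8ff4d-rcpbp") {
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("pod is Ready")
    }
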
	I0617 11:37:02.099436  691879 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0617 11:37:02.099502  691879 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0617 11:37:02.162683  691879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0617 11:37:02.391591  691879 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0617 11:37:02.391656  691879 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0617 11:37:02.413079  691879 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0617 11:37:02.413142  691879 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0617 11:37:02.561276  691879 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-134601" context rescaled to 1 replicas
	I0617 11:37:02.614155  691879 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0617 11:37:02.614229  691879 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0617 11:37:02.778712  691879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.362716108s)
	I0617 11:37:02.795652  691879 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0617 11:37:02.795725  691879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0617 11:37:02.879482  691879 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0617 11:37:02.879552  691879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0617 11:37:03.082095  691879 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-dhctc" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-dhctc" not found
	I0617 11:37:03.082174  691879 pod_ready.go:81] duration metric: took 1.003055428s for pod "coredns-7db6d8ff4d-dhctc" in "kube-system" namespace to be "Ready" ...
	E0617 11:37:03.082201  691879 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-dhctc" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-dhctc" not found
	I0617 11:37:03.082222  691879 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rcpbp" in "kube-system" namespace to be "Ready" ...
	I0617 11:37:03.471591  691879 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0617 11:37:03.471653  691879 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0617 11:37:03.574776  691879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0617 11:37:04.027682  691879 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0617 11:37:04.027742  691879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0617 11:37:04.340618  691879 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0617 11:37:04.340692  691879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0617 11:37:04.362746  691879 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0617 11:37:04.362810  691879 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0617 11:37:04.383100  691879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0617 11:37:05.091243  691879 pod_ready.go:102] pod "coredns-7db6d8ff4d-rcpbp" in "kube-system" namespace has status "Ready":"False"
	I0617 11:37:07.045288  691879 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0617 11:37:07.045430  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:37:07.068132  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	I0617 11:37:07.626839  691879 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0617 11:37:07.670610  691879 pod_ready.go:102] pod "coredns-7db6d8ff4d-rcpbp" in "kube-system" namespace has status "Ready":"False"
	I0617 11:37:07.907031  691879 addons.go:234] Setting addon gcp-auth=true in "addons-134601"
	I0617 11:37:07.907132  691879 host.go:66] Checking if "addons-134601" exists ...
	I0617 11:37:07.907613  691879 cli_runner.go:164] Run: docker container inspect addons-134601 --format={{.State.Status}}
	I0617 11:37:07.931530  691879 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0617 11:37:07.931585  691879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-134601
	I0617 11:37:07.954389  691879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/addons-134601/id_rsa Username:docker}
	I0617 11:37:08.840204  691879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.258099102s)
	I0617 11:37:08.840503  691879 addons.go:475] Verifying addon ingress=true in "addons-134601"
	I0617 11:37:08.842515  691879 out.go:177] * Verifying ingress addon...
	I0617 11:37:08.840626  691879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.130272848s)
	I0617 11:37:08.840390  691879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.156728261s)
	I0617 11:37:08.840455  691879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.029744752s)
	I0617 11:37:08.840335  691879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.222528659s)
	I0617 11:37:08.845374  691879 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0617 11:37:08.850915  691879 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0617 11:37:08.850979  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:09.356556  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:09.856044  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:10.105780  691879 pod_ready.go:102] pod "coredns-7db6d8ff4d-rcpbp" in "kube-system" namespace has status "Ready":"False"
	I0617 11:37:10.401027  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:10.587631  691879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.755264506s)
	I0617 11:37:10.587780  691879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.51954976s)
	I0617 11:37:10.587811  691879 addons.go:475] Verifying addon metrics-server=true in "addons-134601"
	I0617 11:37:10.587872  691879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.385114246s)
	I0617 11:37:10.587901  691879 addons.go:475] Verifying addon registry=true in "addons-134601"
	I0617 11:37:10.591068  691879 out.go:177] * Verifying registry addon...
	I0617 11:37:10.588115  691879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.295713917s)
	I0617 11:37:10.588152  691879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.767551689s)
	I0617 11:37:10.588304  691879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.425557087s)
	I0617 11:37:10.588388  691879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.013544798s)
	I0617 11:37:10.594413  691879 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0617 11:37:10.596559  691879 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-134601 service yakd-dashboard -n yakd-dashboard
	
	W0617 11:37:10.591588  691879 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0617 11:37:10.598875  691879 retry.go:31] will retry after 213.957487ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0617 11:37:10.602511  691879 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0617 11:37:10.602580  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 11:37:10.813197  691879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
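
The apply failure quoted above is a CRD ordering race: the VolumeSnapshotClass object is submitted in the same batch as the CRD that defines its kind, so the API server has no mapping for it yet ("ensure CRDs are installed first"). The retry at 11:37:10.813 re-runs the batch with --force once the CRDs are established, and it completes about 1.6s later (see 11:37:12.393). A minimal hedged sketch of that retry-on-missing-kind shape, stdlib only and not minikube's addons.go retry logic:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	files := []string{
    		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
    		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
    	}
    	args := []string{"apply", "--force"}
    	for _, f := range files {
    		args = append(args, "-f", f)
    	}
    	for {
    		out, err := exec.Command("kubectl", args...).CombinedOutput()
    		if err == nil {
    			fmt.Println("applied")
    			return
    		}
    		if strings.Contains(string(out), "no matches for kind") {
    			// CRD not established yet; wait briefly and retry the whole batch.
    			time.Sleep(200 * time.Millisecond)
    			continue
    		}
    		fmt.Println("apply failed:", strings.TrimSpace(string(out)))
    		return
    	}
    }
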
	I0617 11:37:10.872074  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:11.079854  691879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.696651858s)
	I0617 11:37:11.079946  691879 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-134601"
	I0617 11:37:11.082677  691879 out.go:177] * Verifying csi-hostpath-driver addon...
	I0617 11:37:11.080220  691879 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.148666504s)
	I0617 11:37:11.088005  691879 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0617 11:37:11.086604  691879 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0617 11:37:11.092192  691879 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0617 11:37:11.093834  691879 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0617 11:37:11.093903  691879 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0617 11:37:11.115157  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 11:37:11.116918  691879 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0617 11:37:11.116986  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:11.150651  691879 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0617 11:37:11.150724  691879 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0617 11:37:11.205859  691879 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0617 11:37:11.205931  691879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0617 11:37:11.291754  691879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0617 11:37:11.350630  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:11.597317  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:11.600946  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 11:37:11.850087  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:12.096420  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:12.099555  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 11:37:12.358408  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:12.393501  691879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.580255805s)
	I0617 11:37:12.393655  691879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.101814947s)
	I0617 11:37:12.396865  691879 addons.go:475] Verifying addon gcp-auth=true in "addons-134601"
	I0617 11:37:12.401390  691879 out.go:177] * Verifying gcp-auth addon...
	I0617 11:37:12.404267  691879 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0617 11:37:12.406828  691879 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0617 11:37:12.589492  691879 pod_ready.go:102] pod "coredns-7db6d8ff4d-rcpbp" in "kube-system" namespace has status "Ready":"False"
	I0617 11:37:12.596612  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:12.601184  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 11:37:12.849616  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:13.096455  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:13.101010  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 11:37:13.352614  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:13.595979  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:13.600229  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 11:37:13.850227  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:14.101629  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:14.102508  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 11:37:14.355740  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:14.595574  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:14.603756  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 11:37:14.850315  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:15.090571  691879 pod_ready.go:102] pod "coredns-7db6d8ff4d-rcpbp" in "kube-system" namespace has status "Ready":"False"
	I0617 11:37:15.100391  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:15.101967  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 11:37:15.352294  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:15.589278  691879 pod_ready.go:92] pod "coredns-7db6d8ff4d-rcpbp" in "kube-system" namespace has status "Ready":"True"
	I0617 11:37:15.589352  691879 pod_ready.go:81] duration metric: took 12.507099007s for pod "coredns-7db6d8ff4d-rcpbp" in "kube-system" namespace to be "Ready" ...
	I0617 11:37:15.589378  691879 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-134601" in "kube-system" namespace to be "Ready" ...
	I0617 11:37:15.595878  691879 pod_ready.go:92] pod "etcd-addons-134601" in "kube-system" namespace has status "Ready":"True"
	I0617 11:37:15.595953  691879 pod_ready.go:81] duration metric: took 6.553654ms for pod "etcd-addons-134601" in "kube-system" namespace to be "Ready" ...
	I0617 11:37:15.595983  691879 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-134601" in "kube-system" namespace to be "Ready" ...
	I0617 11:37:15.602375  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:15.603911  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 11:37:15.605192  691879 pod_ready.go:92] pod "kube-apiserver-addons-134601" in "kube-system" namespace has status "Ready":"True"
	I0617 11:37:15.605273  691879 pod_ready.go:81] duration metric: took 9.268668ms for pod "kube-apiserver-addons-134601" in "kube-system" namespace to be "Ready" ...
	I0617 11:37:15.605302  691879 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-134601" in "kube-system" namespace to be "Ready" ...
	I0617 11:37:15.611115  691879 pod_ready.go:92] pod "kube-controller-manager-addons-134601" in "kube-system" namespace has status "Ready":"True"
	I0617 11:37:15.611191  691879 pod_ready.go:81] duration metric: took 5.865571ms for pod "kube-controller-manager-addons-134601" in "kube-system" namespace to be "Ready" ...
	I0617 11:37:15.611218  691879 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8dp6r" in "kube-system" namespace to be "Ready" ...
	I0617 11:37:15.617355  691879 pod_ready.go:92] pod "kube-proxy-8dp6r" in "kube-system" namespace has status "Ready":"True"
	I0617 11:37:15.617385  691879 pod_ready.go:81] duration metric: took 6.144572ms for pod "kube-proxy-8dp6r" in "kube-system" namespace to be "Ready" ...
	I0617 11:37:15.617396  691879 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-134601" in "kube-system" namespace to be "Ready" ...
	I0617 11:37:15.852321  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:15.988548  691879 pod_ready.go:92] pod "kube-scheduler-addons-134601" in "kube-system" namespace has status "Ready":"True"
	I0617 11:37:15.988574  691879 pod_ready.go:81] duration metric: took 371.170141ms for pod "kube-scheduler-addons-134601" in "kube-system" namespace to be "Ready" ...
	I0617 11:37:15.988588  691879 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-q5vq2" in "kube-system" namespace to be "Ready" ...
	I0617 11:37:16.095265  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:16.099645  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 11:37:16.353267  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:16.595046  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:16.599488  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 11:37:16.849739  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:17.095686  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:17.099932  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 11:37:17.350592  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:17.597165  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:17.600599  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 11:37:17.850651  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:17.997476  691879 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-q5vq2" in "kube-system" namespace has status "Ready":"False"
	I0617 11:37:18.114546  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 11:37:18.121221  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:18.349946  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:18.595991  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:18.599742  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 11:37:18.850617  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:19.097375  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:19.101038  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 11:37:19.350240  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:19.605156  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:19.610537  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 11:37:19.850895  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:20.096975  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:20.103293  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0617 11:37:20.350363  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:20.494549  691879 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-q5vq2" in "kube-system" namespace has status "Ready":"False"
	I0617 11:37:20.595590  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:20.599648  691879 kapi.go:107] duration metric: took 10.005230482s to wait for kubernetes.io/minikube-addons=registry ...
	I0617 11:37:20.850631  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:21.096300  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:21.351515  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:21.604178  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:21.850474  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:22.097692  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:22.351576  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:22.496817  691879 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-q5vq2" in "kube-system" namespace has status "Ready":"False"
	I0617 11:37:22.597579  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:22.852466  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:23.098206  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:23.355369  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:23.600275  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:23.851050  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:24.096697  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:24.350341  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:24.596098  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:24.849980  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:24.995027  691879 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-q5vq2" in "kube-system" namespace has status "Ready":"False"
	I0617 11:37:25.096765  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:25.350177  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:25.595998  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:25.851778  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:26.097641  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:26.350119  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:26.597186  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:26.850765  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:26.996141  691879 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-q5vq2" in "kube-system" namespace has status "Ready":"False"
	I0617 11:37:27.098383  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:27.350435  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:27.595954  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:27.850546  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:28.096368  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:28.351304  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:28.598680  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:28.850079  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:29.096378  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:29.349773  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:29.496265  691879 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-q5vq2" in "kube-system" namespace has status "Ready":"False"
	I0617 11:37:29.610950  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:29.852177  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:30.110952  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:30.358777  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:30.595591  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:30.851897  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:31.096053  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:31.351461  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:31.596376  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:31.850629  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:31.996686  691879 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-q5vq2" in "kube-system" namespace has status "Ready":"False"
	I0617 11:37:32.097642  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:32.350443  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:32.595728  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:32.850193  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:33.097045  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:33.364035  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:33.596202  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:33.852514  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:34.095883  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:34.349378  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:34.495067  691879 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-q5vq2" in "kube-system" namespace has status "Ready":"False"
	I0617 11:37:34.595521  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:34.849941  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:35.096382  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:35.349556  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:35.596360  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:35.849695  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:36.096047  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:36.350503  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:36.503938  691879 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-q5vq2" in "kube-system" namespace has status "Ready":"False"
	I0617 11:37:36.595820  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:36.856160  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:37.096036  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:37.350032  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:37.599515  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:37.850005  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:38.095262  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:38.350181  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:38.595601  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:38.850294  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:38.995784  691879 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-q5vq2" in "kube-system" namespace has status "Ready":"False"
	I0617 11:37:39.096237  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:39.350168  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:39.595524  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:39.856494  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:40.096715  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:40.350015  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:40.596386  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:40.850502  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:41.098579  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:41.349862  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:41.521826  691879 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-q5vq2" in "kube-system" namespace has status "Ready":"False"
	I0617 11:37:41.597073  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:41.851263  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:42.098520  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:42.354980  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:42.596491  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:42.850009  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:43.096885  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:43.350094  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:43.595810  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:43.850076  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:43.994552  691879 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-q5vq2" in "kube-system" namespace has status "Ready":"False"
	I0617 11:37:44.095798  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:44.349846  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:44.595941  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:44.851506  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:45.099634  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:45.357710  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:45.597045  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:45.850885  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:45.995180  691879 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-q5vq2" in "kube-system" namespace has status "Ready":"False"
	I0617 11:37:46.095912  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:46.355608  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:46.596489  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:46.850568  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:47.096143  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:47.350214  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:47.596028  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:47.852841  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:47.996810  691879 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-q5vq2" in "kube-system" namespace has status "Ready":"False"
	I0617 11:37:48.096692  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:48.349624  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:48.600598  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:48.858396  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:49.010000  691879 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-q5vq2" in "kube-system" namespace has status "Ready":"True"
	I0617 11:37:49.010071  691879 pod_ready.go:81] duration metric: took 33.021448392s for pod "nvidia-device-plugin-daemonset-q5vq2" in "kube-system" namespace to be "Ready" ...
	I0617 11:37:49.010095  691879 pod_ready.go:38] duration metric: took 46.94184029s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 11:37:49.010142  691879 api_server.go:52] waiting for apiserver process to appear ...
	I0617 11:37:49.010240  691879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:37:49.037085  691879 api_server.go:72] duration metric: took 49.475432504s to wait for apiserver process to appear ...
	I0617 11:37:49.037112  691879 api_server.go:88] waiting for apiserver healthz status ...
	I0617 11:37:49.037134  691879 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0617 11:37:49.045042  691879 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0617 11:37:49.046083  691879 api_server.go:141] control plane version: v1.30.1
	I0617 11:37:49.046141  691879 api_server.go:131] duration metric: took 9.020845ms to wait for apiserver health ...
	I0617 11:37:49.046164  691879 system_pods.go:43] waiting for kube-system pods to appear ...
	I0617 11:37:49.058276  691879 system_pods.go:59] 18 kube-system pods found
	I0617 11:37:49.058352  691879 system_pods.go:61] "coredns-7db6d8ff4d-rcpbp" [4d131475-22c0-4162-a13c-060d85713663] Running
	I0617 11:37:49.058373  691879 system_pods.go:61] "csi-hostpath-attacher-0" [082f66f2-278d-4b0c-8fa5-071f7b8a7bd0] Running
	I0617 11:37:49.058393  691879 system_pods.go:61] "csi-hostpath-resizer-0" [f97050b0-96f1-4ce8-b3d8-97229b4bf912] Running
	I0617 11:37:49.058428  691879 system_pods.go:61] "csi-hostpathplugin-brjpj" [56edcaa4-1c5d-4fef-9182-fac36192a21f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0617 11:37:49.058456  691879 system_pods.go:61] "etcd-addons-134601" [1facb1fa-1533-4e70-aa63-af88db297139] Running
	I0617 11:37:49.058480  691879 system_pods.go:61] "kindnet-j89dc" [840873ce-0d21-4437-b6d1-179226e9f7da] Running
	I0617 11:37:49.058517  691879 system_pods.go:61] "kube-apiserver-addons-134601" [a614f4ca-c148-42ed-a2f3-9358f116ec5a] Running
	I0617 11:37:49.058536  691879 system_pods.go:61] "kube-controller-manager-addons-134601" [6678d4be-e2bb-41f7-a556-8c7a905e9d99] Running
	I0617 11:37:49.058560  691879 system_pods.go:61] "kube-ingress-dns-minikube" [1a0d9cf5-35cb-4250-8fe7-9e6ff7aadad9] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0617 11:37:49.058596  691879 system_pods.go:61] "kube-proxy-8dp6r" [e9a7b296-7d12-497c-b893-a917b754a866] Running
	I0617 11:37:49.058623  691879 system_pods.go:61] "kube-scheduler-addons-134601" [334f79fc-b405-4a33-8a3a-516acd4ab87a] Running
	I0617 11:37:49.058645  691879 system_pods.go:61] "metrics-server-c59844bb4-q8m7p" [ae6a5fc6-8011-4441-a047-37a05043782d] Running
	I0617 11:37:49.058670  691879 system_pods.go:61] "nvidia-device-plugin-daemonset-q5vq2" [670a7d96-6767-4d7a-b66e-430d9fd9ea84] Running
	I0617 11:37:49.058701  691879 system_pods.go:61] "registry-kb4t9" [7de14dcf-fed8-4a0e-80ba-1bb85acaa099] Running
	I0617 11:37:49.058725  691879 system_pods.go:61] "registry-proxy-8q9kp" [2f6405e9-dc4d-4d13-8f69-a273afd74af7] Running
	I0617 11:37:49.058747  691879 system_pods.go:61] "snapshot-controller-745499f584-hj4lb" [801c7238-6beb-4469-8aba-5360c367f482] Running
	I0617 11:37:49.058772  691879 system_pods.go:61] "snapshot-controller-745499f584-qlkj6" [6dcf3350-4d26-4e01-aef9-5b641e7bd68a] Running
	I0617 11:37:49.058806  691879 system_pods.go:61] "storage-provisioner" [b82115e2-7802-46c5-8b7c-b6b278fc6ce1] Running
	I0617 11:37:49.058832  691879 system_pods.go:74] duration metric: took 12.648955ms to wait for pod list to return data ...
	I0617 11:37:49.058855  691879 default_sa.go:34] waiting for default service account to be created ...
	I0617 11:37:49.062087  691879 default_sa.go:45] found service account: "default"
	I0617 11:37:49.062108  691879 default_sa.go:55] duration metric: took 3.230694ms for default service account to be created ...
	I0617 11:37:49.062116  691879 system_pods.go:116] waiting for k8s-apps to be running ...
	I0617 11:37:49.074064  691879 system_pods.go:86] 18 kube-system pods found
	I0617 11:37:49.074136  691879 system_pods.go:89] "coredns-7db6d8ff4d-rcpbp" [4d131475-22c0-4162-a13c-060d85713663] Running
	I0617 11:37:49.074160  691879 system_pods.go:89] "csi-hostpath-attacher-0" [082f66f2-278d-4b0c-8fa5-071f7b8a7bd0] Running
	I0617 11:37:49.074183  691879 system_pods.go:89] "csi-hostpath-resizer-0" [f97050b0-96f1-4ce8-b3d8-97229b4bf912] Running
	I0617 11:37:49.074231  691879 system_pods.go:89] "csi-hostpathplugin-brjpj" [56edcaa4-1c5d-4fef-9182-fac36192a21f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0617 11:37:49.074254  691879 system_pods.go:89] "etcd-addons-134601" [1facb1fa-1533-4e70-aa63-af88db297139] Running
	I0617 11:37:49.074279  691879 system_pods.go:89] "kindnet-j89dc" [840873ce-0d21-4437-b6d1-179226e9f7da] Running
	I0617 11:37:49.074311  691879 system_pods.go:89] "kube-apiserver-addons-134601" [a614f4ca-c148-42ed-a2f3-9358f116ec5a] Running
	I0617 11:37:49.074338  691879 system_pods.go:89] "kube-controller-manager-addons-134601" [6678d4be-e2bb-41f7-a556-8c7a905e9d99] Running
	I0617 11:37:49.074365  691879 system_pods.go:89] "kube-ingress-dns-minikube" [1a0d9cf5-35cb-4250-8fe7-9e6ff7aadad9] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0617 11:37:49.074390  691879 system_pods.go:89] "kube-proxy-8dp6r" [e9a7b296-7d12-497c-b893-a917b754a866] Running
	I0617 11:37:49.074426  691879 system_pods.go:89] "kube-scheduler-addons-134601" [334f79fc-b405-4a33-8a3a-516acd4ab87a] Running
	I0617 11:37:49.074455  691879 system_pods.go:89] "metrics-server-c59844bb4-q8m7p" [ae6a5fc6-8011-4441-a047-37a05043782d] Running
	I0617 11:37:49.074480  691879 system_pods.go:89] "nvidia-device-plugin-daemonset-q5vq2" [670a7d96-6767-4d7a-b66e-430d9fd9ea84] Running
	I0617 11:37:49.074505  691879 system_pods.go:89] "registry-kb4t9" [7de14dcf-fed8-4a0e-80ba-1bb85acaa099] Running
	I0617 11:37:49.074538  691879 system_pods.go:89] "registry-proxy-8q9kp" [2f6405e9-dc4d-4d13-8f69-a273afd74af7] Running
	I0617 11:37:49.074566  691879 system_pods.go:89] "snapshot-controller-745499f584-hj4lb" [801c7238-6beb-4469-8aba-5360c367f482] Running
	I0617 11:37:49.074588  691879 system_pods.go:89] "snapshot-controller-745499f584-qlkj6" [6dcf3350-4d26-4e01-aef9-5b641e7bd68a] Running
	I0617 11:37:49.074613  691879 system_pods.go:89] "storage-provisioner" [b82115e2-7802-46c5-8b7c-b6b278fc6ce1] Running
	I0617 11:37:49.074650  691879 system_pods.go:126] duration metric: took 12.528686ms to wait for k8s-apps to be running ...
	I0617 11:37:49.074678  691879 system_svc.go:44] waiting for kubelet service to be running ....
	I0617 11:37:49.074760  691879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:37:49.088724  691879 system_svc.go:56] duration metric: took 14.038141ms WaitForService to wait for kubelet
	I0617 11:37:49.088800  691879 kubeadm.go:576] duration metric: took 49.527151931s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 11:37:49.088853  691879 node_conditions.go:102] verifying NodePressure condition ...
	I0617 11:37:49.092613  691879 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0617 11:37:49.092688  691879 node_conditions.go:123] node cpu capacity is 2
	I0617 11:37:49.092716  691879 node_conditions.go:105] duration metric: took 3.839542ms to run NodePressure ...
	I0617 11:37:49.092743  691879 start.go:240] waiting for startup goroutines ...
	I0617 11:37:49.099003  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:49.352117  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:49.602970  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:49.850768  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:50.120325  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:50.359741  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:50.599116  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:50.850444  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:51.098143  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:51.351224  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:51.596972  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:51.850710  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:52.095342  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:52.350501  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:52.595779  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:52.850239  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:53.096981  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:53.351088  691879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0617 11:37:53.596584  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:53.857200  691879 kapi.go:107] duration metric: took 45.011822414s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0617 11:37:54.102819  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:54.430936  691879 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0617 11:37:54.430962  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:37:54.596371  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:54.908780  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:37:55.098483  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:55.408329  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:37:55.601235  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:55.907571  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:37:56.098372  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:56.408554  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:37:56.598272  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:56.908786  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:37:57.095631  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:57.408046  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:37:57.596305  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:57.907530  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:37:58.095647  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:58.407770  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:37:58.595392  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:58.908521  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:37:59.096130  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:59.408647  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:37:59.595226  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0617 11:37:59.908152  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:00.100640  691879 kapi.go:107] duration metric: took 49.014027342s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0617 11:38:00.408056  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:00.908172  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:01.407808  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:01.907871  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:02.408546  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:02.907612  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:03.408400  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:03.908102  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:04.407974  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:04.907536  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:05.407940  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:05.907685  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:06.407546  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:06.908545  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:07.407355  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:07.907831  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:08.408133  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:08.907753  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:09.408499  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:09.907659  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:10.408049  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:10.914541  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:11.408198  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:11.907396  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:12.408426  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:12.908247  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:13.407308  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:13.907590  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:14.408316  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:14.907830  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:15.415463  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:15.909014  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:16.408398  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:16.907719  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:17.408198  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:17.907643  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:18.408517  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:18.918073  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:19.407822  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:19.907678  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:20.407551  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:20.907993  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:21.408420  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:21.907717  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:22.408464  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:22.908507  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:23.407873  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:23.907573  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:24.409089  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:24.908857  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:25.408055  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:25.908471  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:26.408289  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:26.907974  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:27.409137  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:27.908072  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:28.407489  691879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0617 11:38:28.908345  691879 kapi.go:107] duration metric: took 1m16.504073825s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0617 11:38:28.910378  691879 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-134601 cluster.
	I0617 11:38:28.912021  691879 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0617 11:38:28.914098  691879 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0617 11:38:28.915806  691879 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, nvidia-device-plugin, default-storageclass, volcano, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0617 11:38:28.917555  691879 addons.go:510] duration metric: took 1m29.355595505s for enable addons: enabled=[cloud-spanner storage-provisioner ingress-dns nvidia-device-plugin default-storageclass volcano metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0617 11:38:28.917613  691879 start.go:245] waiting for cluster config update ...
	I0617 11:38:28.917639  691879 start.go:254] writing updated cluster config ...
	I0617 11:38:28.917969  691879 ssh_runner.go:195] Run: rm -f paused
	I0617 11:38:29.237022  691879 start.go:600] kubectl: 1.30.2, cluster: 1.30.1 (minor skew: 0)
	I0617 11:38:29.239556  691879 out.go:177] * Done! kubectl is now configured to use "addons-134601" cluster and "default" namespace by default
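
For reference, a minimal pod manifest sketch illustrating the `gcp-auth-skip-secret` label mentioned in the messages above (names and image are hypothetical; this assumes the gcp-auth admission webhook skips credential mounting for pods that carry the label at creation time, which is why existing pods need to be recreated or the addon re-enabled with --refresh):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                 # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"     # tells the gcp-auth webhook not to mount GCP credentials
	spec:
	  containers:
	  - name: app
	    image: nginx                     # placeholder image
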
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f12f553082aaf       dd1b12fcb6097       9 seconds ago       Exited              hello-world-app           2                   5307790b52f19       hello-world-app-86c47465fc-ct8bk
	b6a0f77c2e369       11ceee7cdc572       31 seconds ago      Running             nginx                     0                   4f2764544a018       test-job-nginx-0
	80e60f5317480       4f49228258b64       31 seconds ago      Running             nginx                     0                   0742d29f0e2e8       nginx
	aeb66cd2e7488       9e1a67634369d       3 minutes ago       Running             headlamp                  0                   1931325846d30       headlamp-7fc69f7444-7sq4b
	dab9cffb11023       6ef582f3ec844       3 minutes ago       Running             gcp-auth                  0                   33823b7628785       gcp-auth-5db96cd9b4-4xrk9
	61d5a9fceacac       296b5f799fcd8       4 minutes ago       Exited              patch                     0                   26a5d1f1df32d       ingress-nginx-admission-patch-6ml2z
	982af2171b80d       296b5f799fcd8       4 minutes ago       Exited              create                    0                   7e2988965a166       ingress-nginx-admission-create-tzjrd
	2abc3a78739cc       20e3f2db01e81       4 minutes ago       Running             yakd                      0                   1775a443b3da0       yakd-dashboard-5ddbf7d777-9bcsv
	c7f63f903b7da       2437cf7621777       4 minutes ago       Running             coredns                   0                   3d2af0a084bc4       coredns-7db6d8ff4d-rcpbp
	8aa65f85a74a1       ba04bb24b9575       4 minutes ago       Running             storage-provisioner       0                   b47ff04e3ef36       storage-provisioner
	53f6456d3e890       89d73d416b992       4 minutes ago       Running             kindnet-cni               0                   7a7f60c8de029       kindnet-j89dc
	b94d683b305ae       05eccb821e159       4 minutes ago       Running             kube-proxy                0                   533630cf2c077       kube-proxy-8dp6r
	f94e85b7dc638       988b55d423baf       4 minutes ago       Running             kube-apiserver            0                   3d65bed900158       kube-apiserver-addons-134601
	4de1d887d8c35       234ac56e455be       4 minutes ago       Running             kube-controller-manager   0                   c2d9d63c4b164       kube-controller-manager-addons-134601
	ca86d5880b83c       163ff818d154d       4 minutes ago       Running             kube-scheduler            0                   0a8999a3690ec       kube-scheduler-addons-134601
	2f8d6cf85e07f       014faa467e297       4 minutes ago       Running             etcd                      0                   ba93be9aa6201       etcd-addons-134601
	
	
	==> containerd <==
	Jun 17 11:41:28 addons-134601 containerd[767]: time="2024-06-17T11:41:28.139574663Z" level=info msg="TearDown network for sandbox \"1c0724a441e7d8ed77cefb5691a808628e99ff66164cb06e4993e65958643fdc\" successfully"
	Jun 17 11:41:28 addons-134601 containerd[767]: time="2024-06-17T11:41:28.139604496Z" level=info msg="StopPodSandbox for \"1c0724a441e7d8ed77cefb5691a808628e99ff66164cb06e4993e65958643fdc\" returns successfully"
	Jun 17 11:41:28 addons-134601 containerd[767]: time="2024-06-17T11:41:28.654150872Z" level=info msg="RemoveContainer for \"96569571b2ed206fc4321b3992f95415414a9d49633dae9ede9ecc0e9dc3da10\""
	Jun 17 11:41:28 addons-134601 containerd[767]: time="2024-06-17T11:41:28.668531634Z" level=info msg="RemoveContainer for \"96569571b2ed206fc4321b3992f95415414a9d49633dae9ede9ecc0e9dc3da10\" returns successfully"
	Jun 17 11:41:28 addons-134601 containerd[767]: time="2024-06-17T11:41:28.670689837Z" level=info msg="RemoveContainer for \"e4b1a6a232b7d969dd35d4d4131761d44506f75970875c416283bc84853841ca\""
	Jun 17 11:41:28 addons-134601 containerd[767]: time="2024-06-17T11:41:28.685177608Z" level=info msg="RemoveContainer for \"e4b1a6a232b7d969dd35d4d4131761d44506f75970875c416283bc84853841ca\" returns successfully"
	Jun 17 11:41:30 addons-134601 containerd[767]: time="2024-06-17T11:41:30.334850702Z" level=info msg="StopContainer for \"a1fdf7ba7ad638253ef6e86be02fddb519226cb287f0272460753f5c92269eec\" with timeout 2 (s)"
	Jun 17 11:41:30 addons-134601 containerd[767]: time="2024-06-17T11:41:30.335618873Z" level=info msg="Stop container \"a1fdf7ba7ad638253ef6e86be02fddb519226cb287f0272460753f5c92269eec\" with signal terminated"
	Jun 17 11:41:32 addons-134601 containerd[767]: time="2024-06-17T11:41:32.342302800Z" level=info msg="Kill container \"a1fdf7ba7ad638253ef6e86be02fddb519226cb287f0272460753f5c92269eec\""
	Jun 17 11:41:32 addons-134601 containerd[767]: time="2024-06-17T11:41:32.413890377Z" level=info msg="shim disconnected" id=a1fdf7ba7ad638253ef6e86be02fddb519226cb287f0272460753f5c92269eec
	Jun 17 11:41:32 addons-134601 containerd[767]: time="2024-06-17T11:41:32.413959758Z" level=warning msg="cleaning up after shim disconnected" id=a1fdf7ba7ad638253ef6e86be02fddb519226cb287f0272460753f5c92269eec namespace=k8s.io
	Jun 17 11:41:32 addons-134601 containerd[767]: time="2024-06-17T11:41:32.413973042Z" level=info msg="cleaning up dead shim"
	Jun 17 11:41:32 addons-134601 containerd[767]: time="2024-06-17T11:41:32.423779527Z" level=warning msg="cleanup warnings time=\"2024-06-17T11:41:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11806 runtime=io.containerd.runc.v2\n"
	Jun 17 11:41:32 addons-134601 containerd[767]: time="2024-06-17T11:41:32.426473536Z" level=info msg="StopContainer for \"a1fdf7ba7ad638253ef6e86be02fddb519226cb287f0272460753f5c92269eec\" returns successfully"
	Jun 17 11:41:32 addons-134601 containerd[767]: time="2024-06-17T11:41:32.427077387Z" level=info msg="StopPodSandbox for \"8632c33e0badb2cbf99dbb2d9f82eb85e79af4bc413aa0b5f9aeddb2a8703c2c\""
	Jun 17 11:41:32 addons-134601 containerd[767]: time="2024-06-17T11:41:32.427132630Z" level=info msg="Container to stop \"a1fdf7ba7ad638253ef6e86be02fddb519226cb287f0272460753f5c92269eec\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jun 17 11:41:32 addons-134601 containerd[767]: time="2024-06-17T11:41:32.460197120Z" level=info msg="shim disconnected" id=8632c33e0badb2cbf99dbb2d9f82eb85e79af4bc413aa0b5f9aeddb2a8703c2c
	Jun 17 11:41:32 addons-134601 containerd[767]: time="2024-06-17T11:41:32.460264597Z" level=warning msg="cleaning up after shim disconnected" id=8632c33e0badb2cbf99dbb2d9f82eb85e79af4bc413aa0b5f9aeddb2a8703c2c namespace=k8s.io
	Jun 17 11:41:32 addons-134601 containerd[767]: time="2024-06-17T11:41:32.460278907Z" level=info msg="cleaning up dead shim"
	Jun 17 11:41:32 addons-134601 containerd[767]: time="2024-06-17T11:41:32.467978109Z" level=warning msg="cleanup warnings time=\"2024-06-17T11:41:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=11837 runtime=io.containerd.runc.v2\n"
	Jun 17 11:41:32 addons-134601 containerd[767]: time="2024-06-17T11:41:32.512842473Z" level=info msg="TearDown network for sandbox \"8632c33e0badb2cbf99dbb2d9f82eb85e79af4bc413aa0b5f9aeddb2a8703c2c\" successfully"
	Jun 17 11:41:32 addons-134601 containerd[767]: time="2024-06-17T11:41:32.512880962Z" level=info msg="StopPodSandbox for \"8632c33e0badb2cbf99dbb2d9f82eb85e79af4bc413aa0b5f9aeddb2a8703c2c\" returns successfully"
	Jun 17 11:41:32 addons-134601 containerd[767]: time="2024-06-17T11:41:32.679116743Z" level=info msg="RemoveContainer for \"a1fdf7ba7ad638253ef6e86be02fddb519226cb287f0272460753f5c92269eec\""
	Jun 17 11:41:32 addons-134601 containerd[767]: time="2024-06-17T11:41:32.684950602Z" level=info msg="RemoveContainer for \"a1fdf7ba7ad638253ef6e86be02fddb519226cb287f0272460753f5c92269eec\" returns successfully"
	Jun 17 11:41:32 addons-134601 containerd[767]: time="2024-06-17T11:41:32.685519507Z" level=error msg="ContainerStatus for \"a1fdf7ba7ad638253ef6e86be02fddb519226cb287f0272460753f5c92269eec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a1fdf7ba7ad638253ef6e86be02fddb519226cb287f0272460753f5c92269eec\": not found"
	
	
	==> coredns [c7f63f903b7dad5086bd3290c28145b8bf197be69aaf3235827d3e0a13cf4e01] <==
	[INFO] 10.244.0.20:43141 - 1285 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000080088s
	[INFO] 10.244.0.20:41503 - 45174 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002784066s
	[INFO] 10.244.0.20:43141 - 35660 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001408427s
	[INFO] 10.244.0.20:43141 - 47004 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001520785s
	[INFO] 10.244.0.20:41503 - 31218 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001894668s
	[INFO] 10.244.0.20:43141 - 25727 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000148583s
	[INFO] 10.244.0.20:41503 - 8729 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000286959s
	[INFO] 10.244.0.20:47878 - 26467 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000153309s
	[INFO] 10.244.0.20:56284 - 26367 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00025496s
	[INFO] 10.244.0.20:47878 - 39052 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000078833s
	[INFO] 10.244.0.20:47878 - 26604 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000123804s
	[INFO] 10.244.0.20:47878 - 61782 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051289s
	[INFO] 10.244.0.20:56284 - 22317 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055473s
	[INFO] 10.244.0.20:56284 - 40198 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000276916s
	[INFO] 10.244.0.20:56284 - 2880 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000116174s
	[INFO] 10.244.0.20:47878 - 26933 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061438s
	[INFO] 10.244.0.20:56284 - 65127 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000081311s
	[INFO] 10.244.0.20:56284 - 4070 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000054653s
	[INFO] 10.244.0.20:47878 - 260 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000081139s
	[INFO] 10.244.0.20:56284 - 48748 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001417739s
	[INFO] 10.244.0.20:47878 - 39010 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001031771s
	[INFO] 10.244.0.20:56284 - 7221 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001234277s
	[INFO] 10.244.0.20:47878 - 15508 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001800048s
	[INFO] 10.244.0.20:56284 - 12853 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000067527s
	[INFO] 10.244.0.20:47878 - 27430 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000107986s
	
	
	==> describe nodes <==
	Name:               addons-134601
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-134601
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=addons-134601
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_17T11_36_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-134601
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 11:36:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-134601
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 11:41:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 11:41:21 +0000   Mon, 17 Jun 2024 11:36:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 11:41:21 +0000   Mon, 17 Jun 2024 11:36:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 11:41:21 +0000   Mon, 17 Jun 2024 11:36:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 11:41:21 +0000   Mon, 17 Jun 2024 11:36:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-134601
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022356Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022356Ki
	  pods:               110
	System Info:
	  Machine ID:                 8af08a43d0fe435e8dc85025eae20e3f
	  System UUID:                5fb4b37d-72f4-429e-8408-6674f979d6cc
	  Boot ID:                    10e5c427-da39-4514-92df-ee3f91ef093f
	  Kernel Version:             5.15.0-1063-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.33
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-ct8bk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  gcp-auth                    gcp-auth-5db96cd9b4-4xrk9                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  headlamp                    headlamp-7fc69f7444-7sq4b                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 coredns-7db6d8ff4d-rcpbp                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m38s
	  kube-system                 etcd-addons-134601                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m51s
	  kube-system                 kindnet-j89dc                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m38s
	  kube-system                 kube-apiserver-addons-134601             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-controller-manager-addons-134601    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-proxy-8dp6r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-scheduler-addons-134601             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  my-volcano                  test-job-nginx-0                         1 (50%)       1 (50%)     0 (0%)           0 (0%)         2m40s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-9bcsv          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1850m (92%)  1100m (55%)
	  memory             348Mi (4%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m36s  kube-proxy       
	  Normal  Starting                 4m52s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m52s  kubelet          Node addons-134601 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m52s  kubelet          Node addons-134601 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m52s  kubelet          Node addons-134601 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             4m52s  kubelet          Node addons-134601 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  4m52s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m41s  kubelet          Node addons-134601 status is now: NodeReady
	  Normal  RegisteredNode           4m38s  node-controller  Node addons-134601 event: Registered Node addons-134601 in Controller
	
	
	==> dmesg <==
	[  +0.001027] FS-Cache: O-key=[8] '096fed0000000000'
	[  +0.000700] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000954] FS-Cache: N-cookie d=00000000f5be1c10{9p.inode} n=00000000d77b19c4
	[  +0.001040] FS-Cache: N-key=[8] '096fed0000000000'
	[  +0.003119] FS-Cache: Duplicate cookie detected
	[  +0.000692] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000977] FS-Cache: O-cookie d=00000000f5be1c10{9p.inode} n=00000000cac8d902
	[  +0.001034] FS-Cache: O-key=[8] '096fed0000000000'
	[  +0.000711] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000939] FS-Cache: N-cookie d=00000000f5be1c10{9p.inode} n=00000000e03cf592
	[  +0.001051] FS-Cache: N-key=[8] '096fed0000000000'
	[  +2.665516] FS-Cache: Duplicate cookie detected
	[  +0.000684] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000934] FS-Cache: O-cookie d=00000000f5be1c10{9p.inode} n=0000000013106528
	[  +0.001050] FS-Cache: O-key=[8] '086fed0000000000'
	[  +0.000693] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000905] FS-Cache: N-cookie d=00000000f5be1c10{9p.inode} n=0000000079a885a5
	[  +0.001017] FS-Cache: N-key=[8] '086fed0000000000'
	[  +0.283712] FS-Cache: Duplicate cookie detected
	[  +0.000687] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.000989] FS-Cache: O-cookie d=00000000f5be1c10{9p.inode} n=00000000c764ff76
	[  +0.001042] FS-Cache: O-key=[8] '0e6fed0000000000'
	[  +0.000702] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000927] FS-Cache: N-cookie d=00000000f5be1c10{9p.inode} n=0000000066e1ff9a
	[  +0.001048] FS-Cache: N-key=[8] '0e6fed0000000000'
	
	
	==> etcd [2f8d6cf85e07f3e79e19823a63aa29a57dbf7f95d9973f7b470542bc7f65ef3c] <==
	{"level":"info","ts":"2024-06-17T11:36:39.400096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-06-17T11:36:39.400192Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-06-17T11:36:39.411145Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-17T11:36:39.411349Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-17T11:36:39.411374Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-17T11:36:39.411471Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-06-17T11:36:39.411487Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-06-17T11:36:39.787484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-17T11:36:39.78759Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-17T11:36:39.787636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-06-17T11:36:39.787685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-06-17T11:36:39.78772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-06-17T11:36:39.787777Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-06-17T11:36:39.787839Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-06-17T11:36:39.791669Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-134601 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-17T11:36:39.791869Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T11:36:39.792213Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:36:39.792358Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-17T11:36:39.792548Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-17T11:36:39.792593Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-17T11:36:39.794132Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-17T11:36:39.800917Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-06-17T11:36:39.800991Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:36:39.843756Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-17T11:36:39.843876Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> gcp-auth [dab9cffb11023ee47a311724e12224e0f1a8ccde8552c7cee1a1246472156360] <==
	2024/06/17 11:38:30 Ready to write response ...
	2024/06/17 11:38:30 Ready to marshal response ...
	2024/06/17 11:38:30 Ready to write response ...
	2024/06/17 11:38:30 Ready to marshal response ...
	2024/06/17 11:38:30 Ready to write response ...
	2024/06/17 11:38:40 Ready to marshal response ...
	2024/06/17 11:38:40 Ready to write response ...
	2024/06/17 11:38:56 Ready to marshal response ...
	2024/06/17 11:38:56 Ready to write response ...
	2024/06/17 11:38:57 Ready to marshal response ...
	2024/06/17 11:38:57 Ready to write response ...
	2024/06/17 11:38:57 Ready to marshal response ...
	2024/06/17 11:38:57 Ready to write response ...
	2024/06/17 11:38:57 Ready to marshal response ...
	2024/06/17 11:38:57 Ready to write response ...
	2024/06/17 11:39:04 Ready to marshal response ...
	2024/06/17 11:39:04 Ready to write response ...
	2024/06/17 11:40:15 Ready to marshal response ...
	2024/06/17 11:40:15 Ready to write response ...
	2024/06/17 11:40:30 Ready to marshal response ...
	2024/06/17 11:40:30 Ready to write response ...
	2024/06/17 11:41:03 Ready to marshal response ...
	2024/06/17 11:41:03 Ready to write response ...
	2024/06/17 11:41:12 Ready to marshal response ...
	2024/06/17 11:41:12 Ready to write response ...
	
	
	==> kernel <==
	 11:41:37 up  3:24,  0 users,  load average: 1.06, 1.47, 1.93
	Linux addons-134601 5.15.0-1063-aws #69~20.04.1-Ubuntu SMP Fri May 10 19:21:30 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [53f6456d3e890f3b76b502e2b8e96feb926160aca72356e9f5a62689ce60874d] <==
	I0617 11:39:34.174363       1 main.go:227] handling current node
	I0617 11:39:44.187265       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0617 11:39:44.187291       1 main.go:227] handling current node
	I0617 11:39:54.204307       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0617 11:39:54.204337       1 main.go:227] handling current node
	I0617 11:40:04.217892       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0617 11:40:04.217923       1 main.go:227] handling current node
	I0617 11:40:14.236854       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0617 11:40:14.236889       1 main.go:227] handling current node
	I0617 11:40:24.248005       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0617 11:40:24.248034       1 main.go:227] handling current node
	I0617 11:40:34.252243       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0617 11:40:34.252271       1 main.go:227] handling current node
	I0617 11:40:44.264295       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0617 11:40:44.264324       1 main.go:227] handling current node
	I0617 11:40:54.271800       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0617 11:40:54.271827       1 main.go:227] handling current node
	I0617 11:41:04.287203       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0617 11:41:04.287251       1 main.go:227] handling current node
	I0617 11:41:14.311076       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0617 11:41:14.311111       1 main.go:227] handling current node
	I0617 11:41:24.320800       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0617 11:41:24.320828       1 main.go:227] handling current node
	I0617 11:41:34.337926       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0617 11:41:34.338030       1 main.go:227] handling current node
	
	
	==> kube-apiserver [f94e85b7dc638e3aca11f5a296f3bcd7a10b4e8c0701da795ed293df7e3b26de] <==
	I0617 11:40:46.203794       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0617 11:40:46.203936       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0617 11:40:46.327969       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0617 11:40:46.328005       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0617 11:40:47.130411       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0617 11:40:47.328081       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0617 11:40:47.337742       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0617 11:40:52.009769       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0617 11:40:53.045955       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0617 11:41:03.404130       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0617 11:41:03.656810       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.188.7"}
	I0617 11:41:12.326154       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.245.45"}
	I0617 11:41:14.239736       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0617 11:41:14.317436       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0617 11:41:14.786703       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0617 11:41:14.815006       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0617 11:41:14.866760       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0617 11:41:14.908085       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	W0617 11:41:15.365744       1 cacher.go:168] Terminating all watchers from cacher commands.bus.volcano.sh
	W0617 11:41:15.908690       1 cacher.go:168] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0617 11:41:15.977500       1 cacher.go:168] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0617 11:41:15.982823       1 cacher.go:168] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0617 11:41:16.165343       1 cacher.go:168] Terminating all watchers from cacher jobs.batch.volcano.sh
	I0617 11:41:27.106840       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0617 11:41:29.403218       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [4de1d887d8c35f62732d69772df12c19c5c5830780e3feaba11d461addf04d2f] <==
	E0617 11:41:25.468511       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 11:41:26.846373       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 11:41:26.846557       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0617 11:41:28.683734       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="53.734µs"
	I0617 11:41:29.312689       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0617 11:41:29.315294       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="4.997µs"
	I0617 11:41:29.326046       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0617 11:41:29.779924       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0617 11:41:29.779961       1 shared_informer.go:320] Caches are synced for resource quota
	I0617 11:41:30.290425       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0617 11:41:30.290730       1 shared_informer.go:320] Caches are synced for garbage collector
	W0617 11:41:30.639778       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 11:41:30.639820       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 11:41:31.950981       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 11:41:31.951020       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 11:41:32.053256       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 11:41:32.053358       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 11:41:34.355476       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 11:41:34.355738       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 11:41:34.743846       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 11:41:34.743885       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 11:41:36.645753       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 11:41:36.645806       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0617 11:41:36.781447       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0617 11:41:36.781483       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [b94d683b305ae640dc23f8bc01a8ad1602d1465279bd194f555c8b3fd7cd69bc] <==
	I0617 11:37:01.590460       1 server_linux.go:69] "Using iptables proxy"
	I0617 11:37:01.612624       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0617 11:37:01.684334       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0617 11:37:01.684422       1 server_linux.go:165] "Using iptables Proxier"
	I0617 11:37:01.688460       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0617 11:37:01.688495       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0617 11:37:01.688571       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0617 11:37:01.688831       1 server.go:872] "Version info" version="v1.30.1"
	I0617 11:37:01.688855       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0617 11:37:01.695641       1 config.go:319] "Starting node config controller"
	I0617 11:37:01.695662       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0617 11:37:01.697774       1 config.go:192] "Starting service config controller"
	I0617 11:37:01.697788       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0617 11:37:01.697809       1 config.go:101] "Starting endpoint slice config controller"
	I0617 11:37:01.697814       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0617 11:37:01.797940       1 shared_informer.go:320] Caches are synced for node config
	I0617 11:37:01.797996       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0617 11:37:01.798031       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [ca86d5880b83c7a0eb33cc70423e88eb21f8fa56032af302cd47d56c57ce1406] <==
	W0617 11:36:43.741596       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0617 11:36:43.744488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0617 11:36:43.744723       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0617 11:36:43.744862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0617 11:36:43.744865       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0617 11:36:43.745041       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0617 11:36:43.745067       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0617 11:36:43.745046       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0617 11:36:43.744934       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0617 11:36:43.745109       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0617 11:36:43.744980       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0617 11:36:43.745130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0617 11:36:43.744903       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0617 11:36:43.745153       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0617 11:36:43.745194       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0617 11:36:43.745210       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0617 11:36:43.745270       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0617 11:36:43.745286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0617 11:36:43.745335       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0617 11:36:43.745350       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0617 11:36:43.744787       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0617 11:36:43.745365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0617 11:36:43.745540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0617 11:36:43.745616       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0617 11:36:44.837852       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 17 11:41:17 addons-134601 kubelet[1505]: E0617 11:41:17.619710    1505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-ct8bk_default(f6ccd338-2425-4df1-a9d4-bdb964684e30)\"" pod="default/hello-world-app-86c47465fc-ct8bk" podUID="f6ccd338-2425-4df1-a9d4-bdb964684e30"
	Jun 17 11:41:21 addons-134601 kubelet[1505]: I0617 11:41:21.848222    1505 scope.go:117] "RemoveContainer" containerID="96569571b2ed206fc4321b3992f95415414a9d49633dae9ede9ecc0e9dc3da10"
	Jun 17 11:41:21 addons-134601 kubelet[1505]: E0617 11:41:21.848975    1505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(1a0d9cf5-35cb-4250-8fe7-9e6ff7aadad9)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="1a0d9cf5-35cb-4250-8fe7-9e6ff7aadad9"
	Jun 17 11:41:27 addons-134601 kubelet[1505]: I0617 11:41:27.848127    1505 scope.go:117] "RemoveContainer" containerID="e4b1a6a232b7d969dd35d4d4131761d44506f75970875c416283bc84853841ca"
	Jun 17 11:41:28 addons-134601 kubelet[1505]: I0617 11:41:28.233710    1505 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd5zd\" (UniqueName: \"kubernetes.io/projected/1a0d9cf5-35cb-4250-8fe7-9e6ff7aadad9-kube-api-access-wd5zd\") pod \"1a0d9cf5-35cb-4250-8fe7-9e6ff7aadad9\" (UID: \"1a0d9cf5-35cb-4250-8fe7-9e6ff7aadad9\") "
	Jun 17 11:41:28 addons-134601 kubelet[1505]: I0617 11:41:28.235811    1505 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a0d9cf5-35cb-4250-8fe7-9e6ff7aadad9-kube-api-access-wd5zd" (OuterVolumeSpecName: "kube-api-access-wd5zd") pod "1a0d9cf5-35cb-4250-8fe7-9e6ff7aadad9" (UID: "1a0d9cf5-35cb-4250-8fe7-9e6ff7aadad9"). InnerVolumeSpecName "kube-api-access-wd5zd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 17 11:41:28 addons-134601 kubelet[1505]: I0617 11:41:28.334716    1505 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wd5zd\" (UniqueName: \"kubernetes.io/projected/1a0d9cf5-35cb-4250-8fe7-9e6ff7aadad9-kube-api-access-wd5zd\") on node \"addons-134601\" DevicePath \"\""
	Jun 17 11:41:28 addons-134601 kubelet[1505]: I0617 11:41:28.646814    1505 scope.go:117] "RemoveContainer" containerID="96569571b2ed206fc4321b3992f95415414a9d49633dae9ede9ecc0e9dc3da10"
	Jun 17 11:41:28 addons-134601 kubelet[1505]: I0617 11:41:28.664930    1505 scope.go:117] "RemoveContainer" containerID="f12f553082aaf4600f346119af48bd5a435ede6a5665f5486e5b7358ab17e063"
	Jun 17 11:41:28 addons-134601 kubelet[1505]: E0617 11:41:28.665326    1505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-ct8bk_default(f6ccd338-2425-4df1-a9d4-bdb964684e30)\"" pod="default/hello-world-app-86c47465fc-ct8bk" podUID="f6ccd338-2425-4df1-a9d4-bdb964684e30"
	Jun 17 11:41:28 addons-134601 kubelet[1505]: I0617 11:41:28.668881    1505 scope.go:117] "RemoveContainer" containerID="e4b1a6a232b7d969dd35d4d4131761d44506f75970875c416283bc84853841ca"
	Jun 17 11:41:29 addons-134601 kubelet[1505]: I0617 11:41:29.850218    1505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="115d4c2f-cce1-4da3-a7c9-b7b58698fbea" path="/var/lib/kubelet/pods/115d4c2f-cce1-4da3-a7c9-b7b58698fbea/volumes"
	Jun 17 11:41:29 addons-134601 kubelet[1505]: I0617 11:41:29.850603    1505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a0d9cf5-35cb-4250-8fe7-9e6ff7aadad9" path="/var/lib/kubelet/pods/1a0d9cf5-35cb-4250-8fe7-9e6ff7aadad9/volumes"
	Jun 17 11:41:29 addons-134601 kubelet[1505]: I0617 11:41:29.851608    1505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24873dc8-4ebb-4e01-8878-a39035ba2418" path="/var/lib/kubelet/pods/24873dc8-4ebb-4e01-8878-a39035ba2418/volumes"
	Jun 17 11:41:32 addons-134601 kubelet[1505]: I0617 11:41:32.663035    1505 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/703378dc-b812-48fb-b127-fdb7adc9688c-webhook-cert\") pod \"703378dc-b812-48fb-b127-fdb7adc9688c\" (UID: \"703378dc-b812-48fb-b127-fdb7adc9688c\") "
	Jun 17 11:41:32 addons-134601 kubelet[1505]: I0617 11:41:32.663081    1505 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k87g8\" (UniqueName: \"kubernetes.io/projected/703378dc-b812-48fb-b127-fdb7adc9688c-kube-api-access-k87g8\") pod \"703378dc-b812-48fb-b127-fdb7adc9688c\" (UID: \"703378dc-b812-48fb-b127-fdb7adc9688c\") "
	Jun 17 11:41:32 addons-134601 kubelet[1505]: I0617 11:41:32.666013    1505 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/703378dc-b812-48fb-b127-fdb7adc9688c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "703378dc-b812-48fb-b127-fdb7adc9688c" (UID: "703378dc-b812-48fb-b127-fdb7adc9688c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 17 11:41:32 addons-134601 kubelet[1505]: I0617 11:41:32.666323    1505 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/703378dc-b812-48fb-b127-fdb7adc9688c-kube-api-access-k87g8" (OuterVolumeSpecName: "kube-api-access-k87g8") pod "703378dc-b812-48fb-b127-fdb7adc9688c" (UID: "703378dc-b812-48fb-b127-fdb7adc9688c"). InnerVolumeSpecName "kube-api-access-k87g8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 17 11:41:32 addons-134601 kubelet[1505]: I0617 11:41:32.677281    1505 scope.go:117] "RemoveContainer" containerID="a1fdf7ba7ad638253ef6e86be02fddb519226cb287f0272460753f5c92269eec"
	Jun 17 11:41:32 addons-134601 kubelet[1505]: I0617 11:41:32.685226    1505 scope.go:117] "RemoveContainer" containerID="a1fdf7ba7ad638253ef6e86be02fddb519226cb287f0272460753f5c92269eec"
	Jun 17 11:41:32 addons-134601 kubelet[1505]: E0617 11:41:32.685749    1505 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a1fdf7ba7ad638253ef6e86be02fddb519226cb287f0272460753f5c92269eec\": not found" containerID="a1fdf7ba7ad638253ef6e86be02fddb519226cb287f0272460753f5c92269eec"
	Jun 17 11:41:32 addons-134601 kubelet[1505]: I0617 11:41:32.685856    1505 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a1fdf7ba7ad638253ef6e86be02fddb519226cb287f0272460753f5c92269eec"} err="failed to get container status \"a1fdf7ba7ad638253ef6e86be02fddb519226cb287f0272460753f5c92269eec\": rpc error: code = NotFound desc = an error occurred when try to find container \"a1fdf7ba7ad638253ef6e86be02fddb519226cb287f0272460753f5c92269eec\": not found"
	Jun 17 11:41:32 addons-134601 kubelet[1505]: I0617 11:41:32.763579    1505 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/703378dc-b812-48fb-b127-fdb7adc9688c-webhook-cert\") on node \"addons-134601\" DevicePath \"\""
	Jun 17 11:41:32 addons-134601 kubelet[1505]: I0617 11:41:32.763623    1505 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-k87g8\" (UniqueName: \"kubernetes.io/projected/703378dc-b812-48fb-b127-fdb7adc9688c-kube-api-access-k87g8\") on node \"addons-134601\" DevicePath \"\""
	Jun 17 11:41:33 addons-134601 kubelet[1505]: I0617 11:41:33.849960    1505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="703378dc-b812-48fb-b127-fdb7adc9688c" path="/var/lib/kubelet/pods/703378dc-b812-48fb-b127-fdb7adc9688c/volumes"
	
	
	==> storage-provisioner [8aa65f85a74a1420b1c64e7c927f7f4e6d603c3032a29883f963d2df5938cf18] <==
	I0617 11:37:05.910694       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0617 11:37:05.926252       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0617 11:37:05.926295       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0617 11:37:05.939906       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0617 11:37:05.941934       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e1c8b45c-ce85-409e-a496-4c4ec1557e47", APIVersion:"v1", ResourceVersion:"538", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-134601_ae8194fa-0603-4320-bda2-7ae7a70aeb8e became leader
	I0617 11:37:05.942252       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-134601_ae8194fa-0603-4320-bda2-7ae7a70aeb8e!
	I0617 11:37:06.043648       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-134601_ae8194fa-0603-4320-bda2-7ae7a70aeb8e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-134601 -n addons-134601
helpers_test.go:261: (dbg) Run:  kubectl --context addons-134601 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (35.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 image load --daemon gcr.io/google-containers/addon-resizer:functional-479738 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-479738 image load --daemon gcr.io/google-containers/addon-resizer:functional-479738 --alsologtostderr: (4.483632396s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-479738" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 image load --daemon gcr.io/google-containers/addon-resizer:functional-479738 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-479738 image load --daemon gcr.io/google-containers/addon-resizer:functional-479738 --alsologtostderr: (3.456089067s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-479738" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.583739804s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-479738
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 image load --daemon gcr.io/google-containers/addon-resizer:functional-479738 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-479738 image load --daemon gcr.io/google-containers/addon-resizer:functional-479738 --alsologtostderr: (3.19424935s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-479738" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 image save gcr.io/google-containers/addon-resizer:functional-479738 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0617 11:46:48.940440  726214 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:46:48.941430  726214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:46:48.941448  726214 out.go:304] Setting ErrFile to fd 2...
	I0617 11:46:48.941455  726214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:46:48.941835  726214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-685849/.minikube/bin
	I0617 11:46:48.942879  726214 config.go:182] Loaded profile config "functional-479738": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0617 11:46:48.943040  726214 config.go:182] Loaded profile config "functional-479738": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0617 11:46:48.943786  726214 cli_runner.go:164] Run: docker container inspect functional-479738 --format={{.State.Status}}
	I0617 11:46:48.961535  726214 ssh_runner.go:195] Run: systemctl --version
	I0617 11:46:48.961622  726214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-479738
	I0617 11:46:48.978161  726214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/functional-479738/id_rsa Username:docker}
	I0617 11:46:49.067862  726214 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0617 11:46:49.067919  726214 cache_images.go:254] Failed to load cached images for profile functional-479738. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0617 11:46:49.067950  726214 cache_images.go:262] succeeded pushing to: 
	I0617 11:46:49.067956  726214 cache_images.go:263] failed pushing to: functional-479738

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (373.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-440919 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0617 12:23:29.263819  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
E0617 12:23:54.380698  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-440919 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 80 (6m9.820371813s)

                                                
                                                
-- stdout --
	* [old-k8s-version-440919] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19084
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19084-685849/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-685849/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-440919" primary control-plane node in "old-k8s-version-440919" cluster
	* Pulling base image v0.0.44-1718296336-19068 ...
	* Restarting existing docker container for "old-k8s-version-440919" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.6.33 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-440919 addons enable metrics-server
	
	* Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 12:22:52.157725  885555 out.go:291] Setting OutFile to fd 1 ...
	I0617 12:22:52.158485  885555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 12:22:52.158514  885555 out.go:304] Setting ErrFile to fd 2...
	I0617 12:22:52.158533  885555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 12:22:52.158816  885555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-685849/.minikube/bin
	I0617 12:22:52.159225  885555 out.go:298] Setting JSON to false
	I0617 12:22:52.160368  885555 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14720,"bootTime":1718612253,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0617 12:22:52.160464  885555 start.go:139] virtualization:  
	I0617 12:22:52.163082  885555 out.go:177] * [old-k8s-version-440919] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0617 12:22:52.165313  885555 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 12:22:52.171320  885555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 12:22:52.165427  885555 notify.go:220] Checking for updates...
	I0617 12:22:52.180174  885555 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-685849/kubeconfig
	I0617 12:22:52.183082  885555 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-685849/.minikube
	I0617 12:22:52.185063  885555 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0617 12:22:52.186892  885555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 12:22:52.189346  885555 config.go:182] Loaded profile config "old-k8s-version-440919": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0617 12:22:52.192737  885555 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0617 12:22:52.194552  885555 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 12:22:52.215212  885555 docker.go:122] docker version: linux-26.1.4:Docker Engine - Community
	I0617 12:22:52.215335  885555 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0617 12:22:52.321369  885555 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:60 SystemTime:2024-06-17 12:22:52.309017804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0617 12:22:52.321486  885555 docker.go:295] overlay module found
	I0617 12:22:52.325000  885555 out.go:177] * Using the docker driver based on existing profile
	I0617 12:22:52.326701  885555 start.go:297] selected driver: docker
	I0617 12:22:52.326723  885555 start.go:901] validating driver "docker" against &{Name:old-k8s-version-440919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-440919 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:22:52.326850  885555 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 12:22:52.327483  885555 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0617 12:22:52.402054  885555 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:59 SystemTime:2024-06-17 12:22:52.393171294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0617 12:22:52.402402  885555 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:22:52.402439  885555 cni.go:84] Creating CNI manager for ""
	I0617 12:22:52.402452  885555 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0617 12:22:52.402504  885555 start.go:340] cluster config:
	{Name:old-k8s-version-440919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-440919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:22:52.405023  885555 out.go:177] * Starting "old-k8s-version-440919" primary control-plane node in "old-k8s-version-440919" cluster
	I0617 12:22:52.406583  885555 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0617 12:22:52.408422  885555 out.go:177] * Pulling base image v0.0.44-1718296336-19068 ...
	I0617 12:22:52.410902  885555 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0617 12:22:52.410952  885555 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19084-685849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0617 12:22:52.410962  885555 cache.go:56] Caching tarball of preloaded images
	I0617 12:22:52.410989  885555 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 in local docker daemon
	I0617 12:22:52.411038  885555 preload.go:173] Found /home/jenkins/minikube-integration/19084-685849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0617 12:22:52.411046  885555 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0617 12:22:52.411168  885555 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/config.json ...
	I0617 12:22:52.450798  885555 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 in local docker daemon, skipping pull
	I0617 12:22:52.450827  885555 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 exists in daemon, skipping load
	I0617 12:22:52.450849  885555 cache.go:194] Successfully downloaded all kic artifacts
	I0617 12:22:52.450902  885555 start.go:360] acquireMachinesLock for old-k8s-version-440919: {Name:mkecd4a8e077d42f837a1811fc484bfa4380c393 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 12:22:52.450967  885555 start.go:364] duration metric: took 36.036µs to acquireMachinesLock for "old-k8s-version-440919"
	I0617 12:22:52.450990  885555 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:22:52.450996  885555 fix.go:54] fixHost starting: 
	I0617 12:22:52.451255  885555 cli_runner.go:164] Run: docker container inspect old-k8s-version-440919 --format={{.State.Status}}
	I0617 12:22:52.467665  885555 fix.go:112] recreateIfNeeded on old-k8s-version-440919: state=Stopped err=<nil>
	W0617 12:22:52.467694  885555 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:22:52.469838  885555 out.go:177] * Restarting existing docker container for "old-k8s-version-440919" ...
	I0617 12:22:52.471610  885555 cli_runner.go:164] Run: docker start old-k8s-version-440919
	I0617 12:22:52.832691  885555 cli_runner.go:164] Run: docker container inspect old-k8s-version-440919 --format={{.State.Status}}
	I0617 12:22:52.858338  885555 kic.go:430] container "old-k8s-version-440919" state is running.
	I0617 12:22:52.858722  885555 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-440919
	I0617 12:22:52.879459  885555 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/config.json ...
	I0617 12:22:52.879695  885555 machine.go:94] provisionDockerMachine start ...
	I0617 12:22:52.879758  885555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-440919
	I0617 12:22:52.904353  885555 main.go:141] libmachine: Using SSH client type: native
	I0617 12:22:52.904663  885555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2bb0] 0x3e5410 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I0617 12:22:52.904679  885555 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:22:52.905465  885555 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0617 12:22:56.035069  885555 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-440919
	
	I0617 12:22:56.035091  885555 ubuntu.go:169] provisioning hostname "old-k8s-version-440919"
	I0617 12:22:56.035154  885555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-440919
	I0617 12:22:56.057414  885555 main.go:141] libmachine: Using SSH client type: native
	I0617 12:22:56.057690  885555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2bb0] 0x3e5410 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I0617 12:22:56.057702  885555 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-440919 && echo "old-k8s-version-440919" | sudo tee /etc/hostname
	I0617 12:22:56.204148  885555 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-440919
	
	I0617 12:22:56.204302  885555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-440919
	I0617 12:22:56.228206  885555 main.go:141] libmachine: Using SSH client type: native
	I0617 12:22:56.228450  885555 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2bb0] 0x3e5410 <nil>  [] 0s} 127.0.0.1 33832 <nil> <nil>}
	I0617 12:22:56.228466  885555 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-440919' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-440919/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-440919' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:22:56.367900  885555 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:22:56.367974  885555 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19084-685849/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-685849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-685849/.minikube}
	I0617 12:22:56.368047  885555 ubuntu.go:177] setting up certificates
	I0617 12:22:56.368128  885555 provision.go:84] configureAuth start
	I0617 12:22:56.368217  885555 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-440919
	I0617 12:22:56.415867  885555 provision.go:143] copyHostCerts
	I0617 12:22:56.415934  885555 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-685849/.minikube/ca.pem, removing ...
	I0617 12:22:56.415943  885555 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-685849/.minikube/ca.pem
	I0617 12:22:56.416015  885555 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-685849/.minikube/ca.pem (1078 bytes)
	I0617 12:22:56.416109  885555 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-685849/.minikube/cert.pem, removing ...
	I0617 12:22:56.416114  885555 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-685849/.minikube/cert.pem
	I0617 12:22:56.416140  885555 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-685849/.minikube/cert.pem (1123 bytes)
	I0617 12:22:56.416187  885555 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-685849/.minikube/key.pem, removing ...
	I0617 12:22:56.416191  885555 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-685849/.minikube/key.pem
	I0617 12:22:56.416213  885555 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-685849/.minikube/key.pem (1679 bytes)
	I0617 12:22:56.416261  885555 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-685849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-440919 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-440919]
	I0617 12:22:57.275042  885555 provision.go:177] copyRemoteCerts
	I0617 12:22:57.275243  885555 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:22:57.275332  885555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-440919
	I0617 12:22:57.302688  885555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/old-k8s-version-440919/id_rsa Username:docker}
	I0617 12:22:57.403157  885555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0617 12:22:57.444950  885555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0617 12:22:57.480087  885555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0617 12:22:57.513135  885555 provision.go:87] duration metric: took 1.144981187s to configureAuth
	I0617 12:22:57.513210  885555 ubuntu.go:193] setting minikube options for container-runtime
	I0617 12:22:57.513454  885555 config.go:182] Loaded profile config "old-k8s-version-440919": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0617 12:22:57.513492  885555 machine.go:97] duration metric: took 4.633788352s to provisionDockerMachine
	I0617 12:22:57.513514  885555 start.go:293] postStartSetup for "old-k8s-version-440919" (driver="docker")
	I0617 12:22:57.513542  885555 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:22:57.513622  885555 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:22:57.513701  885555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-440919
	I0617 12:22:57.553980  885555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/old-k8s-version-440919/id_rsa Username:docker}
	I0617 12:22:57.670073  885555 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:22:57.674373  885555 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0617 12:22:57.674406  885555 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0617 12:22:57.674417  885555 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0617 12:22:57.674424  885555 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0617 12:22:57.674434  885555 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-685849/.minikube/addons for local assets ...
	I0617 12:22:57.674494  885555 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-685849/.minikube/files for local assets ...
	I0617 12:22:57.674571  885555 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-685849/.minikube/files/etc/ssl/certs/6912422.pem -> 6912422.pem in /etc/ssl/certs
	I0617 12:22:57.674668  885555 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:22:57.689578  885555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/files/etc/ssl/certs/6912422.pem --> /etc/ssl/certs/6912422.pem (1708 bytes)
	I0617 12:22:57.729156  885555 start.go:296] duration metric: took 215.612153ms for postStartSetup
	I0617 12:22:57.729236  885555 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 12:22:57.729295  885555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-440919
	I0617 12:22:57.753695  885555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/old-k8s-version-440919/id_rsa Username:docker}
	I0617 12:22:57.850116  885555 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0617 12:22:57.857034  885555 fix.go:56] duration metric: took 5.406023725s for fixHost
	I0617 12:22:57.857056  885555 start.go:83] releasing machines lock for "old-k8s-version-440919", held for 5.406076959s
	I0617 12:22:57.857139  885555 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-440919
	I0617 12:22:57.888120  885555 ssh_runner.go:195] Run: cat /version.json
	I0617 12:22:57.888171  885555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-440919
	I0617 12:22:57.888399  885555 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:22:57.888443  885555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-440919
	I0617 12:22:57.949739  885555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/old-k8s-version-440919/id_rsa Username:docker}
	I0617 12:22:57.963626  885555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/old-k8s-version-440919/id_rsa Username:docker}
	I0617 12:22:58.056006  885555 ssh_runner.go:195] Run: systemctl --version
	I0617 12:22:58.208520  885555 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0617 12:22:58.213500  885555 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0617 12:22:58.238749  885555 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0617 12:22:58.238821  885555 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:22:58.249255  885555 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0617 12:22:58.249275  885555 start.go:494] detecting cgroup driver to use...
	I0617 12:22:58.249306  885555 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0617 12:22:58.249354  885555 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0617 12:22:58.269193  885555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0617 12:22:58.282299  885555 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:22:58.282358  885555 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:22:58.297081  885555 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:22:58.309842  885555 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:22:58.436111  885555 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:22:58.546483  885555 docker.go:233] disabling docker service ...
	I0617 12:22:58.546615  885555 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:22:58.562816  885555 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:22:58.575682  885555 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:22:58.678375  885555 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:22:58.770103  885555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:22:58.782410  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:22:58.801136  885555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0617 12:22:58.811321  885555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0617 12:22:58.821196  885555 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0617 12:22:58.821259  885555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0617 12:22:58.831326  885555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0617 12:22:58.841541  885555 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0617 12:22:58.851293  885555 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0617 12:22:58.861126  885555 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:22:58.870740  885555 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0617 12:22:58.880940  885555 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:22:58.890041  885555 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:22:58.898806  885555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:22:59.008218  885555 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0617 12:22:59.267619  885555 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0617 12:22:59.267734  885555 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0617 12:22:59.271521  885555 start.go:562] Will wait 60s for crictl version
	I0617 12:22:59.271631  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:22:59.279392  885555 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:22:59.336653  885555 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.33
	RuntimeApiVersion:  v1
	I0617 12:22:59.336764  885555 ssh_runner.go:195] Run: containerd --version
	I0617 12:22:59.363119  885555 ssh_runner.go:195] Run: containerd --version
	I0617 12:22:59.394806  885555 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.6.33 ...
	I0617 12:22:59.396315  885555 cli_runner.go:164] Run: docker network inspect old-k8s-version-440919 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0617 12:22:59.412728  885555 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0617 12:22:59.416806  885555 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:22:59.428707  885555 kubeadm.go:877] updating cluster {Name:old-k8s-version-440919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-440919 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:22:59.428840  885555 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0617 12:22:59.428906  885555 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:22:59.465910  885555 containerd.go:627] all images are preloaded for containerd runtime.
	I0617 12:22:59.465936  885555 containerd.go:534] Images already preloaded, skipping extraction
	I0617 12:22:59.465995  885555 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:22:59.509985  885555 containerd.go:627] all images are preloaded for containerd runtime.
	I0617 12:22:59.510007  885555 cache_images.go:84] Images are preloaded, skipping loading
	I0617 12:22:59.510015  885555 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0617 12:22:59.510142  885555 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-440919 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-440919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:22:59.510207  885555 ssh_runner.go:195] Run: sudo crictl info
	I0617 12:22:59.557207  885555 cni.go:84] Creating CNI manager for ""
	I0617 12:22:59.557232  885555 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0617 12:22:59.557243  885555 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:22:59.557262  885555 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-440919 NodeName:old-k8s-version-440919 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0617 12:22:59.557388  885555 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-440919"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0617 12:22:59.557457  885555 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0617 12:22:59.567120  885555 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:22:59.567190  885555 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:22:59.576630  885555 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0617 12:22:59.596283  885555 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:22:59.615120  885555 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0617 12:22:59.644295  885555 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0617 12:22:59.648013  885555 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:22:59.658366  885555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:22:59.773657  885555 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:22:59.788322  885555 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919 for IP: 192.168.85.2
	I0617 12:22:59.788344  885555 certs.go:194] generating shared ca certs ...
	I0617 12:22:59.788360  885555 certs.go:226] acquiring lock for ca certs: {Name:mkd182a8d082c6d0615c99aed3d4d2e0a9bb102c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:22:59.788514  885555 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-685849/.minikube/ca.key
	I0617 12:22:59.788568  885555 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-685849/.minikube/proxy-client-ca.key
	I0617 12:22:59.788580  885555 certs.go:256] generating profile certs ...
	I0617 12:22:59.788663  885555 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/client.key
	I0617 12:22:59.788737  885555 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/apiserver.key.21e5304d
	I0617 12:22:59.788790  885555 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/proxy-client.key
	I0617 12:22:59.788906  885555 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/691242.pem (1338 bytes)
	W0617 12:22:59.788941  885555 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-685849/.minikube/certs/691242_empty.pem, impossibly tiny 0 bytes
	I0617 12:22:59.788953  885555 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:22:59.788981  885555 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca.pem (1078 bytes)
	I0617 12:22:59.789011  885555 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:22:59.789035  885555 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/key.pem (1679 bytes)
	I0617 12:22:59.789078  885555 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-685849/.minikube/files/etc/ssl/certs/6912422.pem (1708 bytes)
	I0617 12:22:59.789723  885555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:22:59.818838  885555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0617 12:22:59.848636  885555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:22:59.882848  885555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0617 12:22:59.909991  885555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0617 12:22:59.955734  885555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:23:00.002694  885555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:23:00.149409  885555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0617 12:23:00.185963  885555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:23:00.262275  885555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/certs/691242.pem --> /usr/share/ca-certificates/691242.pem (1338 bytes)
	I0617 12:23:00.295357  885555 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/files/etc/ssl/certs/6912422.pem --> /usr/share/ca-certificates/6912422.pem (1708 bytes)
	I0617 12:23:00.330429  885555 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:23:00.361184  885555 ssh_runner.go:195] Run: openssl version
	I0617 12:23:00.368221  885555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/691242.pem && ln -fs /usr/share/ca-certificates/691242.pem /etc/ssl/certs/691242.pem"
	I0617 12:23:00.380960  885555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/691242.pem
	I0617 12:23:00.386459  885555 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 11:43 /usr/share/ca-certificates/691242.pem
	I0617 12:23:00.386628  885555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/691242.pem
	I0617 12:23:00.396657  885555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/691242.pem /etc/ssl/certs/51391683.0"
	I0617 12:23:00.416523  885555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6912422.pem && ln -fs /usr/share/ca-certificates/6912422.pem /etc/ssl/certs/6912422.pem"
	I0617 12:23:00.430651  885555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6912422.pem
	I0617 12:23:00.435065  885555 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 11:43 /usr/share/ca-certificates/6912422.pem
	I0617 12:23:00.435233  885555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6912422.pem
	I0617 12:23:00.445511  885555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6912422.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:23:00.456413  885555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:23:00.469347  885555 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:23:00.474268  885555 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 11:36 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:23:00.474345  885555 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:23:00.482383  885555 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:23:00.492891  885555 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:23:00.497357  885555 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:23:00.504948  885555 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:23:00.513679  885555 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:23:00.521334  885555 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:23:00.528748  885555 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:23:00.535907  885555 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0617 12:23:00.542906  885555 kubeadm.go:391] StartCluster: {Name:old-k8s-version-440919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-440919 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:23:00.543011  885555 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0617 12:23:00.543079  885555 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:23:00.605249  885555 cri.go:89] found id: "69a919aa5c14ae21855b7a11063b8873bc124cd369a2569e33734aa5edc4b63c"
	I0617 12:23:00.605279  885555 cri.go:89] found id: "f315a7ec7452781d6412306a4aa3e993b47b29eaccf0203b0834fc2aeb329242"
	I0617 12:23:00.605285  885555 cri.go:89] found id: "18a872f6ea8e1a79fa9931e5e3cb240a4d1d40263325f9d59898e7a21b7db918"
	I0617 12:23:00.605288  885555 cri.go:89] found id: "c04cf172cf8b22a2b419169621cc8909b6cc7170c752536e945bf6b4b01bcca0"
	I0617 12:23:00.605294  885555 cri.go:89] found id: "33b3439c7183458694ca56c7f6e9260911d8c14a7c86c0469b8dc03f505deb0f"
	I0617 12:23:00.605298  885555 cri.go:89] found id: "d51ff90b464a07bbc7028e9d8452454acff9356047d6f3ccfd9837fad4344e0d"
	I0617 12:23:00.605301  885555 cri.go:89] found id: "eeba0dd857ff09e5969b3bcf37ad51875f7624dbde1d304a3dbbade1d64fbf06"
	I0617 12:23:00.605304  885555 cri.go:89] found id: "93248ad0d3571533f5f55b355330f125d3756357d5377ca4b5685d1541bf3ea8"
	I0617 12:23:00.605307  885555 cri.go:89] found id: ""
	I0617 12:23:00.605360  885555 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0617 12:23:00.617459  885555 cri.go:116] JSON = null
	W0617 12:23:00.617510  885555 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0617 12:23:00.617586  885555 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:23:00.626573  885555 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:23:00.626594  885555 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:23:00.626600  885555 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:23:00.626654  885555 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:23:00.634942  885555 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:23:00.635399  885555 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-440919" does not appear in /home/jenkins/minikube-integration/19084-685849/kubeconfig
	I0617 12:23:00.635542  885555 kubeconfig.go:62] /home/jenkins/minikube-integration/19084-685849/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-440919" cluster setting kubeconfig missing "old-k8s-version-440919" context setting]
	I0617 12:23:00.635833  885555 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-685849/kubeconfig: {Name:mk0f1db8295cd0d3b8a0428491dac563579b7b2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
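
The repair noted here adds the missing cluster and context entries to the kubeconfig and writes the file back under a lock. A hedged sketch of the add-and-write part using client-go's clientcmd package; the profile name, server address, and kubeconfig path are taken from the log, and ensureContext is an illustrative helper, not minikube's kubeconfig.go:

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// ensureContext adds a cluster and a context entry for name if they are
// missing, then writes the kubeconfig back to disk.
func ensureContext(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		c := api.NewCluster()
		c.Server = server
		cfg.Clusters[name] = c
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	_ = ensureContext("/home/jenkins/minikube-integration/19084-685849/kubeconfig",
		"old-k8s-version-440919", "https://192.168.85.2:8443")
}
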
	I0617 12:23:00.637089  885555 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:23:00.645932  885555 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.85.2
	I0617 12:23:00.645965  885555 kubeadm.go:591] duration metric: took 19.359787ms to restartPrimaryControlPlane
	I0617 12:23:00.645976  885555 kubeadm.go:393] duration metric: took 103.080133ms to StartCluster
	I0617 12:23:00.645991  885555 settings.go:142] acquiring lock: {Name:mk2a85dcb9c00537cffe742aea475ca7d2cf09a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:23:00.646065  885555 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-685849/kubeconfig
	I0617 12:23:00.646678  885555 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-685849/kubeconfig: {Name:mk0f1db8295cd0d3b8a0428491dac563579b7b2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:23:00.646876  885555 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0617 12:23:00.649130  885555 out.go:177] * Verifying Kubernetes components...
	I0617 12:23:00.647165  885555 config.go:182] Loaded profile config "old-k8s-version-440919": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0617 12:23:00.647176  885555 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 12:23:00.650884  885555 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-440919"
	I0617 12:23:00.650926  885555 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-440919"
	W0617 12:23:00.650939  885555 addons.go:243] addon storage-provisioner should already be in state true
	I0617 12:23:00.650971  885555 host.go:66] Checking if "old-k8s-version-440919" exists ...
	I0617 12:23:00.651456  885555 cli_runner.go:164] Run: docker container inspect old-k8s-version-440919 --format={{.State.Status}}
	I0617 12:23:00.651605  885555 addons.go:69] Setting dashboard=true in profile "old-k8s-version-440919"
	I0617 12:23:00.651632  885555 addons.go:234] Setting addon dashboard=true in "old-k8s-version-440919"
	W0617 12:23:00.651642  885555 addons.go:243] addon dashboard should already be in state true
	I0617 12:23:00.651666  885555 host.go:66] Checking if "old-k8s-version-440919" exists ...
	I0617 12:23:00.652031  885555 cli_runner.go:164] Run: docker container inspect old-k8s-version-440919 --format={{.State.Status}}
	I0617 12:23:00.652319  885555 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-440919"
	I0617 12:23:00.652352  885555 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-440919"
	I0617 12:23:00.652580  885555 cli_runner.go:164] Run: docker container inspect old-k8s-version-440919 --format={{.State.Status}}
	I0617 12:23:00.652836  885555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:23:00.653276  885555 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-440919"
	I0617 12:23:00.653309  885555 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-440919"
	W0617 12:23:00.653316  885555 addons.go:243] addon metrics-server should already be in state true
	I0617 12:23:00.653339  885555 host.go:66] Checking if "old-k8s-version-440919" exists ...
	I0617 12:23:00.653708  885555 cli_runner.go:164] Run: docker container inspect old-k8s-version-440919 --format={{.State.Status}}
	I0617 12:23:00.700792  885555 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:23:00.706831  885555 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:23:00.706854  885555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 12:23:00.706925  885555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-440919
	I0617 12:23:00.709245  885555 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0617 12:23:00.710890  885555 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 12:23:00.710918  885555 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 12:23:00.710988  885555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-440919
	I0617 12:23:00.717906  885555 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0617 12:23:00.719945  885555 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0617 12:23:00.721957  885555 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0617 12:23:00.721978  885555 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0617 12:23:00.722051  885555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-440919
	I0617 12:23:00.736704  885555 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-440919"
	W0617 12:23:00.736726  885555 addons.go:243] addon default-storageclass should already be in state true
	I0617 12:23:00.736753  885555 host.go:66] Checking if "old-k8s-version-440919" exists ...
	I0617 12:23:00.737171  885555 cli_runner.go:164] Run: docker container inspect old-k8s-version-440919 --format={{.State.Status}}
	I0617 12:23:00.748911  885555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/old-k8s-version-440919/id_rsa Username:docker}
	I0617 12:23:00.772636  885555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/old-k8s-version-440919/id_rsa Username:docker}
	I0617 12:23:00.792526  885555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/old-k8s-version-440919/id_rsa Username:docker}
	I0617 12:23:00.811864  885555 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 12:23:00.811893  885555 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 12:23:00.811967  885555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-440919
	I0617 12:23:00.839041  885555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/old-k8s-version-440919/id_rsa Username:docker}
	I0617 12:23:00.887888  885555 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:23:00.932884  885555 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-440919" to be "Ready" ...
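
The node_ready wait started here keeps fetching the node object and checking its Ready condition, tolerating the connection-refused and TLS-handshake-timeout errors seen further down until the apiserver is reachable again. A minimal client-go sketch of that polling loop, assuming the in-VM kubeconfig path from the log (illustrative only, not minikube's node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(),
			"old-k8s-version-440919", metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		// Errors (connection refused, TLS handshake timeout) are expected
		// while the apiserver restarts; wait and try again.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for node readiness")
}
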
	I0617 12:23:00.973216  885555 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0617 12:23:00.973242  885555 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0617 12:23:00.997464  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:23:01.018880  885555 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0617 12:23:01.018907  885555 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0617 12:23:01.050661  885555 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 12:23:01.050692  885555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0617 12:23:01.067089  885555 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0617 12:23:01.067135  885555 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0617 12:23:01.089667  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:23:01.119825  885555 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0617 12:23:01.119850  885555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0617 12:23:01.156488  885555 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 12:23:01.156530  885555 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 12:23:01.234950  885555 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0617 12:23:01.234989  885555 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0617 12:23:01.294576  885555 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:23:01.294603  885555 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0617 12:23:01.301164  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:01.301199  885555 retry.go:31] will retry after 262.625154ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
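
Each failed apply above is retried after a growing, jittered delay, which is what the "will retry after ..." lines record. A stdlib-only sketch of that retry shape (not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds or attempts are exhausted, sleeping a
// jittered, growing delay between tries - the same shape as the
// "will retry after ..." messages in the log.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	_ = retry(5, 300*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connection to the server localhost:8443 was refused")
		}
		return nil
	})
}
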
	I0617 12:23:01.333378  885555 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0617 12:23:01.333403  885555 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0617 12:23:01.355492  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:01.355524  885555 retry.go:31] will retry after 369.495821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:01.362165  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:23:01.391889  885555 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0617 12:23:01.391937  885555 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0617 12:23:01.452723  885555 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0617 12:23:01.452750  885555 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0617 12:23:01.518447  885555 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0617 12:23:01.518484  885555 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	W0617 12:23:01.520554  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:01.520585  885555 retry.go:31] will retry after 308.262317ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:01.543208  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
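
Each addon is applied as a single kubectl invocation, with every manifest passed via its own -f flag and KUBECONFIG pointing at the in-VM kubeconfig. A local, hedged sketch of building such an invocation with os/exec; the binary and manifest paths are taken from the log, and the snippet is illustrative rather than minikube's ssh_runner:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
		// ... remaining dashboard manifests listed in the log
	}

	// Build: kubectl apply -f <m1> -f <m2> ...
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}

	cmd := exec.Command("/var/lib/minikube/binaries/v1.20.0/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
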
	I0617 12:23:01.564458  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0617 12:23:01.708393  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:01.708427  885555 retry.go:31] will retry after 219.585766ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:01.725708  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0617 12:23:01.767187  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:01.767222  885555 retry.go:31] will retry after 355.659ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:01.829472  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0617 12:23:01.864789  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:01.864821  885555 retry.go:31] will retry after 554.018825ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:01.929141  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0617 12:23:01.955548  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:01.955581  885555 retry.go:31] will retry after 206.664436ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0617 12:23:02.036742  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:02.036776  885555 retry.go:31] will retry after 356.35313ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:02.123312  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:23:02.163154  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0617 12:23:02.284556  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:02.284591  885555 retry.go:31] will retry after 458.919832ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0617 12:23:02.313653  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:02.313686  885555 retry.go:31] will retry after 758.797113ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:02.394051  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0617 12:23:02.419419  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0617 12:23:02.570661  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:02.570739  885555 retry.go:31] will retry after 393.760382ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0617 12:23:02.605357  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:02.605434  885555 retry.go:31] will retry after 488.024513ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:02.744634  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0617 12:23:02.843559  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:02.843642  885555 retry.go:31] will retry after 913.517574ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:02.933868  885555 node_ready.go:53] error getting node "old-k8s-version-440919": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-440919": dial tcp 192.168.85.2:8443: connect: connection refused
	I0617 12:23:02.965131  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0617 12:23:03.061320  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:03.061352  885555 retry.go:31] will retry after 552.821545ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:03.073687  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:23:03.094243  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0617 12:23:03.214822  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:03.214856  885555 retry.go:31] will retry after 749.035633ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0617 12:23:03.262218  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:03.262250  885555 retry.go:31] will retry after 791.767955ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:03.615205  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0617 12:23:03.719767  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:03.719800  885555 retry.go:31] will retry after 899.601562ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:03.757921  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0617 12:23:03.847487  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:03.847516  885555 retry.go:31] will retry after 1.829224213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:03.964330  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:23:04.054731  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0617 12:23:04.064671  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:04.064700  885555 retry.go:31] will retry after 1.610283958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0617 12:23:04.156151  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:04.156206  885555 retry.go:31] will retry after 1.129164257s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:04.620015  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0617 12:23:04.711938  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:04.711973  885555 retry.go:31] will retry after 2.254665233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:05.285655  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0617 12:23:05.376020  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:05.376055  885555 retry.go:31] will retry after 1.493764328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:05.433532  885555 node_ready.go:53] error getting node "old-k8s-version-440919": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-440919": dial tcp 192.168.85.2:8443: connect: connection refused
	I0617 12:23:05.676019  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:23:05.677140  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0617 12:23:05.774513  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:05.774543  885555 retry.go:31] will retry after 2.545456821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0617 12:23:05.792487  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:05.792519  885555 retry.go:31] will retry after 2.180979358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:06.870090  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0617 12:23:06.967399  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:06.967449  885555 retry.go:31] will retry after 2.475150179s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:06.967626  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0617 12:23:07.074930  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:07.074968  885555 retry.go:31] will retry after 2.06515887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:07.434096  885555 node_ready.go:53] error getting node "old-k8s-version-440919": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-440919": dial tcp 192.168.85.2:8443: connect: connection refused
	I0617 12:23:07.974317  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0617 12:23:08.111583  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:08.111614  885555 retry.go:31] will retry after 3.992785891s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:08.320800  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0617 12:23:08.434152  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:08.434184  885555 retry.go:31] will retry after 2.510778382s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:09.140703  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0617 12:23:09.351810  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:09.351852  885555 retry.go:31] will retry after 6.233994395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0617 12:23:09.443638  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:23:10.945240  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:23:12.104693  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:23:15.586854  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0617 12:23:19.551802  885555 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.108126917s)
	W0617 12:23:19.551833  885555 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0617 12:23:19.551851  885555 retry.go:31] will retry after 2.83542669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0617 12:23:19.934289  885555 node_ready.go:53] error getting node "old-k8s-version-440919": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-440919": net/http: TLS handshake timeout
	I0617 12:23:20.634012  885555 node_ready.go:49] node "old-k8s-version-440919" has status "Ready":"True"
	I0617 12:23:20.634040  885555 node_ready.go:38] duration metric: took 19.701113166s for node "old-k8s-version-440919" to be "Ready" ...
	I0617 12:23:20.634050  885555 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:23:20.962018  885555 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-z5m4s" in "kube-system" namespace to be "Ready" ...
	I0617 12:23:21.066823  885555 pod_ready.go:92] pod "coredns-74ff55c5b-z5m4s" in "kube-system" namespace has status "Ready":"True"
	I0617 12:23:21.066902  885555 pod_ready.go:81] duration metric: took 104.789501ms for pod "coredns-74ff55c5b-z5m4s" in "kube-system" namespace to be "Ready" ...
	I0617 12:23:21.066930  885555 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-440919" in "kube-system" namespace to be "Ready" ...
	I0617 12:23:21.109961  885555 pod_ready.go:92] pod "etcd-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"True"
	I0617 12:23:21.110036  885555 pod_ready.go:81] duration metric: took 43.083933ms for pod "etcd-old-k8s-version-440919" in "kube-system" namespace to be "Ready" ...
	I0617 12:23:21.110079  885555 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-440919" in "kube-system" namespace to be "Ready" ...
	I0617 12:23:21.150078  885555 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"True"
	I0617 12:23:21.150154  885555 pod_ready.go:81] duration metric: took 40.04842ms for pod "kube-apiserver-old-k8s-version-440919" in "kube-system" namespace to be "Ready" ...
	I0617 12:23:21.150197  885555 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace to be "Ready" ...
	I0617 12:23:22.388399  885555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:23:22.749480  885555 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.804201793s)
	I0617 12:23:22.749620  885555 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-440919"
	I0617 12:23:22.749554  885555 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.644834613s)
	I0617 12:23:23.168731  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:23:23.242905  885555 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.655995452s)
	I0617 12:23:23.245161  885555 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-440919 addons enable metrics-server
	
	I0617 12:23:23.710017  885555 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.321579096s)
	I0617 12:23:23.719721  885555 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0617 12:23:23.721542  885555 addons.go:510] duration metric: took 23.074347203s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0617 12:23:25.657368  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:23:27.661200  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:23:30.159406  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:23:32.657831  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:23:35.157382  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:23:37.157473  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:23:39.656799  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:23:42.162665  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:23:44.658925  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:23:46.662748  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:23:49.162455  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:23:51.662288  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:23:54.156174  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:23:56.157853  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:23:58.657543  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:01.157809  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:03.656626  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:05.657261  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:08.156475  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:10.157691  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:12.660101  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:15.157771  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:17.161281  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:19.656034  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:21.656575  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:23.656668  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:25.657525  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:27.657455  885555 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"True"
	I0617 12:24:27.657480  885555 pod_ready.go:81] duration metric: took 1m6.507261232s for pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:27.657492  885555 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6cbbp" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:27.663106  885555 pod_ready.go:92] pod "kube-proxy-6cbbp" in "kube-system" namespace has status "Ready":"True"
	I0617 12:24:27.663133  885555 pod_ready.go:81] duration metric: took 5.634104ms for pod "kube-proxy-6cbbp" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:27.663144  885555 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:29.668964  885555 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:31.670666  885555 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:34.189725  885555 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:36.670390  885555 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:38.670551  885555 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:40.670731  885555 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:43.169289  885555 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:45.170728  885555 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:47.672178  885555 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:49.670525  885555 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"True"
	I0617 12:24:49.670549  885555 pod_ready.go:81] duration metric: took 22.007397004s for pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:49.670560  885555 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:51.676638  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:53.677637  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:55.678593  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:57.680006  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:00.227719  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:02.676617  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:04.677161  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:06.677283  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:08.677787  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:11.177266  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:13.178144  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:15.677598  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:18.176235  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:20.179707  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:22.676872  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:24.677972  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:26.679201  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:29.177348  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:31.178088  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:33.179700  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:35.677468  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:38.177126  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:40.676371  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:42.677310  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:45.177712  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:47.178194  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:49.676614  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:51.676732  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:53.677279  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:56.177048  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:58.677258  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:00.678239  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:03.177762  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:05.678227  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:08.177217  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:10.177299  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:12.676993  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:14.677092  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:16.677521  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:19.188232  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:21.676157  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:23.677286  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:25.677431  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:27.684502  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:30.177981  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:32.178527  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:34.676710  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:36.677503  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:39.178172  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:41.676928  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:44.177512  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:46.178137  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:48.677227  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:51.176960  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:53.185227  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:55.676741  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:57.677237  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:00.242260  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:02.676826  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:05.177893  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:07.178330  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:09.677066  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:12.176887  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:14.177719  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:16.677603  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:19.177348  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:21.676735  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:23.682194  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:26.177222  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:28.177272  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:30.177377  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:32.178204  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:34.676512  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:37.177445  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:39.177612  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:41.178613  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:43.676535  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:45.677032  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:47.677406  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:50.177541  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:52.677215  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:55.177797  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:57.179007  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:59.676566  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:01.677792  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:04.176440  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:06.176925  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:08.176996  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:10.177265  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:12.177659  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:14.675610  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:16.676935  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:19.177812  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:21.677036  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:23.677547  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:26.177720  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:28.676925  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:31.177302  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:33.677301  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:35.677900  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:38.177641  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:40.178592  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:42.676558  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:45.179191  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:47.683635  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:49.676757  885555 pod_ready.go:81] duration metric: took 4m0.006182587s for pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace to be "Ready" ...
	E0617 12:28:49.676783  885555 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0617 12:28:49.676792  885555 pod_ready.go:38] duration metric: took 5m29.042731503s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:28:49.676807  885555 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:28:49.676850  885555 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:28:49.676918  885555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:28:49.719861  885555 cri.go:89] found id: "4c8b05cc53338f5377be14688172756dd02f7989ad55761a58b0f4c2aa83b6c5"
	I0617 12:28:49.719881  885555 cri.go:89] found id: "93248ad0d3571533f5f55b355330f125d3756357d5377ca4b5685d1541bf3ea8"
	I0617 12:28:49.719886  885555 cri.go:89] found id: ""
	I0617 12:28:49.719893  885555 logs.go:276] 2 containers: [4c8b05cc53338f5377be14688172756dd02f7989ad55761a58b0f4c2aa83b6c5 93248ad0d3571533f5f55b355330f125d3756357d5377ca4b5685d1541bf3ea8]
	I0617 12:28:49.719948  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.723530  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.727012  885555 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0617 12:28:49.727086  885555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:28:49.767901  885555 cri.go:89] found id: "5ae6ec941dc2fa5dddb29c886ab412ba333c98fb902b2443b1d6d06aa68dc55a"
	I0617 12:28:49.767923  885555 cri.go:89] found id: "d51ff90b464a07bbc7028e9d8452454acff9356047d6f3ccfd9837fad4344e0d"
	I0617 12:28:49.767929  885555 cri.go:89] found id: ""
	I0617 12:28:49.767936  885555 logs.go:276] 2 containers: [5ae6ec941dc2fa5dddb29c886ab412ba333c98fb902b2443b1d6d06aa68dc55a d51ff90b464a07bbc7028e9d8452454acff9356047d6f3ccfd9837fad4344e0d]
	I0617 12:28:49.768003  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.771901  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.775152  885555 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0617 12:28:49.775248  885555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:28:49.815564  885555 cri.go:89] found id: "c28f4777836661aecd3fac7733ba84292508f8e09940637ec697a14b608e8b21"
	I0617 12:28:49.815588  885555 cri.go:89] found id: "69a919aa5c14ae21855b7a11063b8873bc124cd369a2569e33734aa5edc4b63c"
	I0617 12:28:49.815593  885555 cri.go:89] found id: ""
	I0617 12:28:49.815601  885555 logs.go:276] 2 containers: [c28f4777836661aecd3fac7733ba84292508f8e09940637ec697a14b608e8b21 69a919aa5c14ae21855b7a11063b8873bc124cd369a2569e33734aa5edc4b63c]
	I0617 12:28:49.815665  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.819083  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.822151  885555 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:28:49.822254  885555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:28:49.859763  885555 cri.go:89] found id: "5ea33e0403f502a14bee51b367aa2ab21c3f1c2e8def56f7e25073809a96c24e"
	I0617 12:28:49.859835  885555 cri.go:89] found id: "33b3439c7183458694ca56c7f6e9260911d8c14a7c86c0469b8dc03f505deb0f"
	I0617 12:28:49.859856  885555 cri.go:89] found id: ""
	I0617 12:28:49.859872  885555 logs.go:276] 2 containers: [5ea33e0403f502a14bee51b367aa2ab21c3f1c2e8def56f7e25073809a96c24e 33b3439c7183458694ca56c7f6e9260911d8c14a7c86c0469b8dc03f505deb0f]
	I0617 12:28:49.859933  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.863286  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.866606  885555 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:28:49.866686  885555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:28:49.904648  885555 cri.go:89] found id: "c39d788e9ddc5e61548640b61f4e70b4d0b0f5c9ffc4fa37603cf44274abcb4b"
	I0617 12:28:49.904671  885555 cri.go:89] found id: "c04cf172cf8b22a2b419169621cc8909b6cc7170c752536e945bf6b4b01bcca0"
	I0617 12:28:49.904676  885555 cri.go:89] found id: ""
	I0617 12:28:49.904683  885555 logs.go:276] 2 containers: [c39d788e9ddc5e61548640b61f4e70b4d0b0f5c9ffc4fa37603cf44274abcb4b c04cf172cf8b22a2b419169621cc8909b6cc7170c752536e945bf6b4b01bcca0]
	I0617 12:28:49.904741  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.908394  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.911662  885555 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:28:49.911730  885555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:28:49.950580  885555 cri.go:89] found id: "5d81177822de55491501d88ceb395813919c8619721c616af1c12bcc910eab24"
	I0617 12:28:49.950601  885555 cri.go:89] found id: "eeba0dd857ff09e5969b3bcf37ad51875f7624dbde1d304a3dbbade1d64fbf06"
	I0617 12:28:49.950606  885555 cri.go:89] found id: ""
	I0617 12:28:49.950614  885555 logs.go:276] 2 containers: [5d81177822de55491501d88ceb395813919c8619721c616af1c12bcc910eab24 eeba0dd857ff09e5969b3bcf37ad51875f7624dbde1d304a3dbbade1d64fbf06]
	I0617 12:28:49.950672  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.954456  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.958400  885555 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0617 12:28:49.958514  885555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:28:50.011808  885555 cri.go:89] found id: "8677bbcb8def4ca7d9e6734637430dbad8bd9a8def059c11b56ed4db17c04599"
	I0617 12:28:50.011884  885555 cri.go:89] found id: "f315a7ec7452781d6412306a4aa3e993b47b29eaccf0203b0834fc2aeb329242"
	I0617 12:28:50.011923  885555 cri.go:89] found id: ""
	I0617 12:28:50.011959  885555 logs.go:276] 2 containers: [8677bbcb8def4ca7d9e6734637430dbad8bd9a8def059c11b56ed4db17c04599 f315a7ec7452781d6412306a4aa3e993b47b29eaccf0203b0834fc2aeb329242]
	I0617 12:28:50.012049  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.016362  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.020466  885555 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:28:50.020545  885555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:28:50.067228  885555 cri.go:89] found id: "362a01bad45f4e4081cbbf8ee77edf080e86ad9cd464e744534abcc40393504f"
	I0617 12:28:50.067251  885555 cri.go:89] found id: ""
	I0617 12:28:50.067263  885555 logs.go:276] 1 containers: [362a01bad45f4e4081cbbf8ee77edf080e86ad9cd464e744534abcc40393504f]
	I0617 12:28:50.067372  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.071279  885555 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:28:50.071383  885555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:28:50.113796  885555 cri.go:89] found id: "06a68f0d2052e9b0ee97c609a7bc16438fa8b8ff0a43a0157e802993142b7e0d"
	I0617 12:28:50.113820  885555 cri.go:89] found id: "2ece5f594edf5341f8da2c11dc5de2fba23ee9c0c50f9b2a45fcb718aef5173f"
	I0617 12:28:50.113825  885555 cri.go:89] found id: ""
	I0617 12:28:50.113832  885555 logs.go:276] 2 containers: [06a68f0d2052e9b0ee97c609a7bc16438fa8b8ff0a43a0157e802993142b7e0d 2ece5f594edf5341f8da2c11dc5de2fba23ee9c0c50f9b2a45fcb718aef5173f]
	I0617 12:28:50.113892  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.117850  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.121493  885555 logs.go:123] Gathering logs for coredns [69a919aa5c14ae21855b7a11063b8873bc124cd369a2569e33734aa5edc4b63c] ...
	I0617 12:28:50.121525  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69a919aa5c14ae21855b7a11063b8873bc124cd369a2569e33734aa5edc4b63c"
	I0617 12:28:50.161831  885555 logs.go:123] Gathering logs for kube-scheduler [5ea33e0403f502a14bee51b367aa2ab21c3f1c2e8def56f7e25073809a96c24e] ...
	I0617 12:28:50.161903  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea33e0403f502a14bee51b367aa2ab21c3f1c2e8def56f7e25073809a96c24e"
	I0617 12:28:50.213147  885555 logs.go:123] Gathering logs for kube-scheduler [33b3439c7183458694ca56c7f6e9260911d8c14a7c86c0469b8dc03f505deb0f] ...
	I0617 12:28:50.213181  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33b3439c7183458694ca56c7f6e9260911d8c14a7c86c0469b8dc03f505deb0f"
	I0617 12:28:50.272783  885555 logs.go:123] Gathering logs for kindnet [f315a7ec7452781d6412306a4aa3e993b47b29eaccf0203b0834fc2aeb329242] ...
	I0617 12:28:50.272813  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f315a7ec7452781d6412306a4aa3e993b47b29eaccf0203b0834fc2aeb329242"
	I0617 12:28:50.316068  885555 logs.go:123] Gathering logs for kubelet ...
	I0617 12:28:50.316096  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0617 12:28:50.373058  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:20 old-k8s-version-440919 kubelet[665]: E0617 12:23:20.511830     665 reflector.go:138] object-"kube-system"/"kindnet-token-vw9vb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-vw9vb" is forbidden: User "system:node:old-k8s-version-440919" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-440919' and this object
	W0617 12:28:50.373986  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:20 old-k8s-version-440919 kubelet[665]: E0617 12:23:20.588626     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-440919" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-440919' and this object
	W0617 12:28:50.374296  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:20 old-k8s-version-440919 kubelet[665]: E0617 12:23:20.589644     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-440919" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-440919' and this object
	W0617 12:28:50.374545  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:20 old-k8s-version-440919 kubelet[665]: E0617 12:23:20.589736     665 reflector.go:138] object-"kube-system"/"coredns-token-ppjrc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-ppjrc" is forbidden: User "system:node:old-k8s-version-440919" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-440919' and this object
	W0617 12:28:50.374792  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:20 old-k8s-version-440919 kubelet[665]: E0617 12:23:20.589810     665 reflector.go:138] object-"kube-system"/"kube-proxy-token-m6zml": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-m6zml" is forbidden: User "system:node:old-k8s-version-440919" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-440919' and this object
	W0617 12:28:50.375039  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:20 old-k8s-version-440919 kubelet[665]: E0617 12:23:20.636545     665 reflector.go:138] object-"kube-system"/"metrics-server-token-jlld8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-jlld8" is forbidden: User "system:node:old-k8s-version-440919" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-440919' and this object
	W0617 12:28:50.375291  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:20 old-k8s-version-440919 kubelet[665]: E0617 12:23:20.636646     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-5zdhm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-5zdhm" is forbidden: User "system:node:old-k8s-version-440919" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-440919' and this object
	W0617 12:28:50.375542  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:20 old-k8s-version-440919 kubelet[665]: E0617 12:23:20.636709     665 reflector.go:138] object-"default"/"default-token-gzv8z": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-gzv8z" is forbidden: User "system:node:old-k8s-version-440919" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-440919' and this object
	W0617 12:28:50.388578  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:23 old-k8s-version-440919 kubelet[665]: E0617 12:23:23.464647     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0617 12:28:50.388812  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:23 old-k8s-version-440919 kubelet[665]: E0617 12:23:23.740524     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.392010  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:38 old-k8s-version-440919 kubelet[665]: E0617 12:23:38.439222     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0617 12:28:50.392888  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:40 old-k8s-version-440919 kubelet[665]: E0617 12:23:40.373162     665 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-84qpn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-84qpn" is forbidden: User "system:node:old-k8s-version-440919" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-440919' and this object
	W0617 12:28:50.395821  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:49 old-k8s-version-440919 kubelet[665]: E0617 12:23:49.414941     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.396422  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:54 old-k8s-version-440919 kubelet[665]: E0617 12:23:54.892167     665 pod_workers.go:191] Error syncing pod 1bf883c3-748b-4cdc-8387-8880e791d486 ("storage-provisioner_kube-system(1bf883c3-748b-4cdc-8387-8880e791d486)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1bf883c3-748b-4cdc-8387-8880e791d486)"
	W0617 12:28:50.396777  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:54 old-k8s-version-440919 kubelet[665]: E0617 12:23:54.896407     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.397288  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:55 old-k8s-version-440919 kubelet[665]: E0617 12:23:55.902354     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.401230  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:01 old-k8s-version-440919 kubelet[665]: E0617 12:24:01.423695     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0617 12:28:50.401585  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:02 old-k8s-version-440919 kubelet[665]: E0617 12:24:02.748089     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.401912  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:12 old-k8s-version-440919 kubelet[665]: E0617 12:24:12.413178     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.402525  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:13 old-k8s-version-440919 kubelet[665]: E0617 12:24:13.949512     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.402860  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:22 old-k8s-version-440919 kubelet[665]: E0617 12:24:22.748069     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.403045  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:24 old-k8s-version-440919 kubelet[665]: E0617 12:24:24.416098     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.403230  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:36 old-k8s-version-440919 kubelet[665]: E0617 12:24:36.413401     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.404395  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:38 old-k8s-version-440919 kubelet[665]: E0617 12:24:38.002634     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.404759  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:42 old-k8s-version-440919 kubelet[665]: E0617 12:24:42.748531     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.407251  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:51 old-k8s-version-440919 kubelet[665]: E0617 12:24:51.421671     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0617 12:28:50.408889  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:57 old-k8s-version-440919 kubelet[665]: E0617 12:24:57.413303     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.409119  885555 logs.go:138] Found kubelet problem: Jun 17 12:25:02 old-k8s-version-440919 kubelet[665]: E0617 12:25:02.413257     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.409473  885555 logs.go:138] Found kubelet problem: Jun 17 12:25:09 old-k8s-version-440919 kubelet[665]: E0617 12:25:09.412555     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.409664  885555 logs.go:138] Found kubelet problem: Jun 17 12:25:17 old-k8s-version-440919 kubelet[665]: E0617 12:25:17.412907     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.410806  885555 logs.go:138] Found kubelet problem: Jun 17 12:25:25 old-k8s-version-440919 kubelet[665]: E0617 12:25:25.159131     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.411399  885555 logs.go:138] Found kubelet problem: Jun 17 12:25:32 old-k8s-version-440919 kubelet[665]: E0617 12:25:32.416049     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.415883  885555 logs.go:138] Found kubelet problem: Jun 17 12:25:32 old-k8s-version-440919 kubelet[665]: E0617 12:25:32.748275     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.416257  885555 logs.go:138] Found kubelet problem: Jun 17 12:25:44 old-k8s-version-440919 kubelet[665]: E0617 12:25:44.413119     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.416477  885555 logs.go:138] Found kubelet problem: Jun 17 12:25:45 old-k8s-version-440919 kubelet[665]: E0617 12:25:45.413224     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.416837  885555 logs.go:138] Found kubelet problem: Jun 17 12:25:59 old-k8s-version-440919 kubelet[665]: E0617 12:25:59.412465     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.417043  885555 logs.go:138] Found kubelet problem: Jun 17 12:26:00 old-k8s-version-440919 kubelet[665]: E0617 12:26:00.413122     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.417393  885555 logs.go:138] Found kubelet problem: Jun 17 12:26:13 old-k8s-version-440919 kubelet[665]: E0617 12:26:13.412518     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.420247  885555 logs.go:138] Found kubelet problem: Jun 17 12:26:15 old-k8s-version-440919 kubelet[665]: E0617 12:26:15.420664     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0617 12:28:50.420664  885555 logs.go:138] Found kubelet problem: Jun 17 12:26:27 old-k8s-version-440919 kubelet[665]: E0617 12:26:27.412560     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.420859  885555 logs.go:138] Found kubelet problem: Jun 17 12:26:30 old-k8s-version-440919 kubelet[665]: E0617 12:26:30.415779     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.421191  885555 logs.go:138] Found kubelet problem: Jun 17 12:26:42 old-k8s-version-440919 kubelet[665]: E0617 12:26:42.416836     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.421435  885555 logs.go:138] Found kubelet problem: Jun 17 12:26:45 old-k8s-version-440919 kubelet[665]: E0617 12:26:45.413376     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.422192  885555 logs.go:138] Found kubelet problem: Jun 17 12:26:56 old-k8s-version-440919 kubelet[665]: E0617 12:26:56.379496     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.422515  885555 logs.go:138] Found kubelet problem: Jun 17 12:26:56 old-k8s-version-440919 kubelet[665]: E0617 12:26:56.413956     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.422873  885555 logs.go:138] Found kubelet problem: Jun 17 12:27:02 old-k8s-version-440919 kubelet[665]: E0617 12:27:02.748469     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.423104  885555 logs.go:138] Found kubelet problem: Jun 17 12:27:08 old-k8s-version-440919 kubelet[665]: E0617 12:27:08.418201     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.423464  885555 logs.go:138] Found kubelet problem: Jun 17 12:27:15 old-k8s-version-440919 kubelet[665]: E0617 12:27:15.412573     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.423653  885555 logs.go:138] Found kubelet problem: Jun 17 12:27:20 old-k8s-version-440919 kubelet[665]: E0617 12:27:20.413089     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.423984  885555 logs.go:138] Found kubelet problem: Jun 17 12:27:27 old-k8s-version-440919 kubelet[665]: E0617 12:27:27.412591     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.424174  885555 logs.go:138] Found kubelet problem: Jun 17 12:27:32 old-k8s-version-440919 kubelet[665]: E0617 12:27:32.413007     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.424504  885555 logs.go:138] Found kubelet problem: Jun 17 12:27:41 old-k8s-version-440919 kubelet[665]: E0617 12:27:41.413008     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.424691  885555 logs.go:138] Found kubelet problem: Jun 17 12:27:44 old-k8s-version-440919 kubelet[665]: E0617 12:27:44.412980     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.424877  885555 logs.go:138] Found kubelet problem: Jun 17 12:27:55 old-k8s-version-440919 kubelet[665]: E0617 12:27:55.412933     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.425209  885555 logs.go:138] Found kubelet problem: Jun 17 12:27:56 old-k8s-version-440919 kubelet[665]: E0617 12:27:56.415878     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.425539  885555 logs.go:138] Found kubelet problem: Jun 17 12:28:07 old-k8s-version-440919 kubelet[665]: E0617 12:28:07.412568     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.425773  885555 logs.go:138] Found kubelet problem: Jun 17 12:28:09 old-k8s-version-440919 kubelet[665]: E0617 12:28:09.413101     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.426124  885555 logs.go:138] Found kubelet problem: Jun 17 12:28:21 old-k8s-version-440919 kubelet[665]: E0617 12:28:21.412553     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.426317  885555 logs.go:138] Found kubelet problem: Jun 17 12:28:22 old-k8s-version-440919 kubelet[665]: E0617 12:28:22.412972     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.426650  885555 logs.go:138] Found kubelet problem: Jun 17 12:28:32 old-k8s-version-440919 kubelet[665]: E0617 12:28:32.412994     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.426839  885555 logs.go:138] Found kubelet problem: Jun 17 12:28:37 old-k8s-version-440919 kubelet[665]: E0617 12:28:37.413026     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.427171  885555 logs.go:138] Found kubelet problem: Jun 17 12:28:46 old-k8s-version-440919 kubelet[665]: E0617 12:28:46.412739     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.427359  885555 logs.go:138] Found kubelet problem: Jun 17 12:28:48 old-k8s-version-440919 kubelet[665]: E0617 12:28:48.413238     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0617 12:28:50.427371  885555 logs.go:123] Gathering logs for kube-apiserver [4c8b05cc53338f5377be14688172756dd02f7989ad55761a58b0f4c2aa83b6c5] ...
	I0617 12:28:50.427386  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c8b05cc53338f5377be14688172756dd02f7989ad55761a58b0f4c2aa83b6c5"
	I0617 12:28:50.518286  885555 logs.go:123] Gathering logs for coredns [c28f4777836661aecd3fac7733ba84292508f8e09940637ec697a14b608e8b21] ...
	I0617 12:28:50.518323  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c28f4777836661aecd3fac7733ba84292508f8e09940637ec697a14b608e8b21"
	I0617 12:28:50.569465  885555 logs.go:123] Gathering logs for storage-provisioner [06a68f0d2052e9b0ee97c609a7bc16438fa8b8ff0a43a0157e802993142b7e0d] ...
	I0617 12:28:50.569499  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06a68f0d2052e9b0ee97c609a7bc16438fa8b8ff0a43a0157e802993142b7e0d"
	I0617 12:28:50.616569  885555 logs.go:123] Gathering logs for kube-apiserver [93248ad0d3571533f5f55b355330f125d3756357d5377ca4b5685d1541bf3ea8] ...
	I0617 12:28:50.616598  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93248ad0d3571533f5f55b355330f125d3756357d5377ca4b5685d1541bf3ea8"
	I0617 12:28:50.717482  885555 logs.go:123] Gathering logs for kindnet [8677bbcb8def4ca7d9e6734637430dbad8bd9a8def059c11b56ed4db17c04599] ...
	I0617 12:28:50.718134  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8677bbcb8def4ca7d9e6734637430dbad8bd9a8def059c11b56ed4db17c04599"
	I0617 12:28:50.778810  885555 logs.go:123] Gathering logs for kubernetes-dashboard [362a01bad45f4e4081cbbf8ee77edf080e86ad9cd464e744534abcc40393504f] ...
	I0617 12:28:50.778909  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 362a01bad45f4e4081cbbf8ee77edf080e86ad9cd464e744534abcc40393504f"
	I0617 12:28:50.839775  885555 logs.go:123] Gathering logs for kube-proxy [c39d788e9ddc5e61548640b61f4e70b4d0b0f5c9ffc4fa37603cf44274abcb4b] ...
	I0617 12:28:50.839804  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c39d788e9ddc5e61548640b61f4e70b4d0b0f5c9ffc4fa37603cf44274abcb4b"
	I0617 12:28:50.942314  885555 logs.go:123] Gathering logs for kube-controller-manager [5d81177822de55491501d88ceb395813919c8619721c616af1c12bcc910eab24] ...
	I0617 12:28:50.942344  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d81177822de55491501d88ceb395813919c8619721c616af1c12bcc910eab24"
	I0617 12:28:51.055138  885555 logs.go:123] Gathering logs for dmesg ...
	I0617 12:28:51.055216  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:28:51.085812  885555 logs.go:123] Gathering logs for etcd [5ae6ec941dc2fa5dddb29c886ab412ba333c98fb902b2443b1d6d06aa68dc55a] ...
	I0617 12:28:51.085897  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ae6ec941dc2fa5dddb29c886ab412ba333c98fb902b2443b1d6d06aa68dc55a"
	I0617 12:28:51.162029  885555 logs.go:123] Gathering logs for etcd [d51ff90b464a07bbc7028e9d8452454acff9356047d6f3ccfd9837fad4344e0d] ...
	I0617 12:28:51.162062  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d51ff90b464a07bbc7028e9d8452454acff9356047d6f3ccfd9837fad4344e0d"
	I0617 12:28:51.238485  885555 logs.go:123] Gathering logs for storage-provisioner [2ece5f594edf5341f8da2c11dc5de2fba23ee9c0c50f9b2a45fcb718aef5173f] ...
	I0617 12:28:51.238519  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ece5f594edf5341f8da2c11dc5de2fba23ee9c0c50f9b2a45fcb718aef5173f"
	I0617 12:28:51.292421  885555 logs.go:123] Gathering logs for containerd ...
	I0617 12:28:51.292450  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0617 12:28:51.374150  885555 logs.go:123] Gathering logs for container status ...
	I0617 12:28:51.374186  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:28:51.457123  885555 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:28:51.457153  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:28:51.697650  885555 logs.go:123] Gathering logs for kube-proxy [c04cf172cf8b22a2b419169621cc8909b6cc7170c752536e945bf6b4b01bcca0] ...
	I0617 12:28:51.697684  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c04cf172cf8b22a2b419169621cc8909b6cc7170c752536e945bf6b4b01bcca0"
	I0617 12:28:51.750523  885555 logs.go:123] Gathering logs for kube-controller-manager [eeba0dd857ff09e5969b3bcf37ad51875f7624dbde1d304a3dbbade1d64fbf06] ...
	I0617 12:28:51.750557  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eeba0dd857ff09e5969b3bcf37ad51875f7624dbde1d304a3dbbade1d64fbf06"
	I0617 12:28:51.859712  885555 out.go:304] Setting ErrFile to fd 2...
	I0617 12:28:51.859745  885555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0617 12:28:51.859810  885555 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0617 12:28:51.859828  885555 out.go:239]   Jun 17 12:28:22 old-k8s-version-440919 kubelet[665]: E0617 12:28:22.412972     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jun 17 12:28:22 old-k8s-version-440919 kubelet[665]: E0617 12:28:22.412972     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:51.859846  885555 out.go:239]   Jun 17 12:28:32 old-k8s-version-440919 kubelet[665]: E0617 12:28:32.412994     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	  Jun 17 12:28:32 old-k8s-version-440919 kubelet[665]: E0617 12:28:32.412994     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:51.859855  885555 out.go:239]   Jun 17 12:28:37 old-k8s-version-440919 kubelet[665]: E0617 12:28:37.413026     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jun 17 12:28:37 old-k8s-version-440919 kubelet[665]: E0617 12:28:37.413026     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:51.859862  885555 out.go:239]   Jun 17 12:28:46 old-k8s-version-440919 kubelet[665]: E0617 12:28:46.412739     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	  Jun 17 12:28:46 old-k8s-version-440919 kubelet[665]: E0617 12:28:46.412739     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:51.859872  885555 out.go:239]   Jun 17 12:28:48 old-k8s-version-440919 kubelet[665]: E0617 12:28:48.413238     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jun 17 12:28:48 old-k8s-version-440919 kubelet[665]: E0617 12:28:48.413238     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0617 12:28:51.859882  885555 out.go:304] Setting ErrFile to fd 2...
	I0617 12:28:51.859888  885555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 12:29:01.861302  885555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:29:01.873993  885555 api_server.go:72] duration metric: took 6m1.227082782s to wait for apiserver process to appear ...
	I0617 12:29:01.874018  885555 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:29:01.876566  885555 out.go:177] 
	W0617 12:29:01.878398  885555 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0617 12:29:01.878433  885555 out.go:239] * 
	* 
	W0617 12:29:01.879721  885555 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 12:29:01.881717  885555 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-440919 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-440919
helpers_test.go:235: (dbg) docker inspect old-k8s-version-440919:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2651a58cac1778b7b24079b341e22a56a533af5c409287af65e531963fbbf253",
	        "Created": "2024-06-17T12:20:03.806864351Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 885837,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-06-17T12:22:52.82424401Z",
	            "FinishedAt": "2024-06-17T12:22:51.074764608Z"
	        },
	        "Image": "sha256:d36081176f43c9443534fbd23d834d14507b037430e066481145283247762ade",
	        "ResolvConfPath": "/var/lib/docker/containers/2651a58cac1778b7b24079b341e22a56a533af5c409287af65e531963fbbf253/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2651a58cac1778b7b24079b341e22a56a533af5c409287af65e531963fbbf253/hostname",
	        "HostsPath": "/var/lib/docker/containers/2651a58cac1778b7b24079b341e22a56a533af5c409287af65e531963fbbf253/hosts",
	        "LogPath": "/var/lib/docker/containers/2651a58cac1778b7b24079b341e22a56a533af5c409287af65e531963fbbf253/2651a58cac1778b7b24079b341e22a56a533af5c409287af65e531963fbbf253-json.log",
	        "Name": "/old-k8s-version-440919",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-440919:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-440919",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9bced0db5eb9b5288250584c9bf08b5a76bbe393d2d56a8e3753c657bc8ffbc1-init/diff:/var/lib/docker/overlay2/c07c2f412fc737ec224babdeaebc84a76c392761a424a81f6ee0a5caa5d8373f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9bced0db5eb9b5288250584c9bf08b5a76bbe393d2d56a8e3753c657bc8ffbc1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9bced0db5eb9b5288250584c9bf08b5a76bbe393d2d56a8e3753c657bc8ffbc1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9bced0db5eb9b5288250584c9bf08b5a76bbe393d2d56a8e3753c657bc8ffbc1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-440919",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-440919/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-440919",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-440919",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-440919",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0c0f15ee99352b062138292207ba9d31a60dee3b4045a59fe519f881a3369bcc",
	            "SandboxKey": "/var/run/docker/netns/0c0f15ee9935",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33828"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33829"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-440919": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "a8c5ba1322cff7b002bb65fba67a0e4d0f4e8d7b9bd0170cab214228e814808c",
	                    "EndpointID": "40bf29499f3042a33ce2802aabb17ff09f11be8c22909e04eee06facbcdbaff0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-440919",
	                        "2651a58cac17"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-440919 -n old-k8s-version-440919
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-440919 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-440919 logs -n 25: (2.700310117s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-064909 sudo find                             | cilium-064909             | jenkins | v1.33.1 | 17 Jun 24 12:18 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-064909 sudo crio                             | cilium-064909             | jenkins | v1.33.1 | 17 Jun 24 12:18 UTC |                     |
	|         | config                                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-064909                                       | cilium-064909             | jenkins | v1.33.1 | 17 Jun 24 12:18 UTC | 17 Jun 24 12:18 UTC |
	| start   | -p force-systemd-env-835812                            | force-systemd-env-835812  | jenkins | v1.33.1 | 17 Jun 24 12:18 UTC | 17 Jun 24 12:19 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-681619                              | force-systemd-flag-681619 | jenkins | v1.33.1 | 17 Jun 24 12:18 UTC | 17 Jun 24 12:18 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-681619                           | force-systemd-flag-681619 | jenkins | v1.33.1 | 17 Jun 24 12:18 UTC | 17 Jun 24 12:18 UTC |
	| start   | -p cert-expiration-590735                              | cert-expiration-590735    | jenkins | v1.33.1 | 17 Jun 24 12:18 UTC | 17 Jun 24 12:19 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-835812                               | force-systemd-env-835812  | jenkins | v1.33.1 | 17 Jun 24 12:19 UTC | 17 Jun 24 12:19 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-835812                            | force-systemd-env-835812  | jenkins | v1.33.1 | 17 Jun 24 12:19 UTC | 17 Jun 24 12:19 UTC |
	| start   | -p cert-options-440034                                 | cert-options-440034       | jenkins | v1.33.1 | 17 Jun 24 12:19 UTC | 17 Jun 24 12:19 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | cert-options-440034 ssh                                | cert-options-440034       | jenkins | v1.33.1 | 17 Jun 24 12:19 UTC | 17 Jun 24 12:19 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-440034 -- sudo                         | cert-options-440034       | jenkins | v1.33.1 | 17 Jun 24 12:19 UTC | 17 Jun 24 12:19 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-440034                                 | cert-options-440034       | jenkins | v1.33.1 | 17 Jun 24 12:19 UTC | 17 Jun 24 12:19 UTC |
	| start   | -p old-k8s-version-440919                              | old-k8s-version-440919    | jenkins | v1.33.1 | 17 Jun 24 12:19 UTC | 17 Jun 24 12:22 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-440919        | old-k8s-version-440919    | jenkins | v1.33.1 | 17 Jun 24 12:22 UTC | 17 Jun 24 12:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-440919                              | old-k8s-version-440919    | jenkins | v1.33.1 | 17 Jun 24 12:22 UTC | 17 Jun 24 12:22 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| start   | -p cert-expiration-590735                              | cert-expiration-590735    | jenkins | v1.33.1 | 17 Jun 24 12:22 UTC | 17 Jun 24 12:22 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-590735                              | cert-expiration-590735    | jenkins | v1.33.1 | 17 Jun 24 12:22 UTC | 17 Jun 24 12:22 UTC |
	| start   | -p no-preload-969284                                   | no-preload-969284         | jenkins | v1.33.1 | 17 Jun 24 12:22 UTC | 17 Jun 24 12:24 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-440919             | old-k8s-version-440919    | jenkins | v1.33.1 | 17 Jun 24 12:22 UTC | 17 Jun 24 12:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-440919                              | old-k8s-version-440919    | jenkins | v1.33.1 | 17 Jun 24 12:22 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-969284             | no-preload-969284         | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-969284                                   | no-preload-969284         | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-969284                  | no-preload-969284         | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC | 17 Jun 24 12:24 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-969284                                   | no-preload-969284         | jenkins | v1.33.1 | 17 Jun 24 12:24 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 12:24:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 12:24:25.861038  891830 out.go:291] Setting OutFile to fd 1 ...
	I0617 12:24:25.861455  891830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 12:24:25.861469  891830 out.go:304] Setting ErrFile to fd 2...
	I0617 12:24:25.861475  891830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 12:24:25.861748  891830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-685849/.minikube/bin
	I0617 12:24:25.862138  891830 out.go:298] Setting JSON to false
	I0617 12:24:25.863234  891830 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14813,"bootTime":1718612253,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0617 12:24:25.863308  891830 start.go:139] virtualization:  
	I0617 12:24:25.865715  891830 out.go:177] * [no-preload-969284] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0617 12:24:25.868416  891830 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 12:24:25.870302  891830 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 12:24:25.872048  891830 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-685849/kubeconfig
	I0617 12:24:25.869664  891830 notify.go:220] Checking for updates...
	I0617 12:24:25.876093  891830 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-685849/.minikube
	I0617 12:24:25.878325  891830 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0617 12:24:25.880082  891830 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 12:24:25.882351  891830 config.go:182] Loaded profile config "no-preload-969284": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0617 12:24:25.882931  891830 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 12:24:25.904170  891830 docker.go:122] docker version: linux-26.1.4:Docker Engine - Community
	I0617 12:24:25.904292  891830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0617 12:24:25.974209  891830 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2024-06-17 12:24:25.962894927 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0617 12:24:25.974320  891830 docker.go:295] overlay module found
	I0617 12:24:25.976235  891830 out.go:177] * Using the docker driver based on existing profile
	I0617 12:24:25.977895  891830 start.go:297] selected driver: docker
	I0617 12:24:25.977914  891830 start.go:901] validating driver "docker" against &{Name:no-preload-969284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-969284 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mount
String:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:24:25.978038  891830 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 12:24:25.978662  891830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0617 12:24:26.048920  891830 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2024-06-17 12:24:26.029818087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0617 12:24:26.049286  891830 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0617 12:24:26.049316  891830 cni.go:84] Creating CNI manager for ""
	I0617 12:24:26.049332  891830 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0617 12:24:26.049379  891830 start.go:340] cluster config:
	{Name:no-preload-969284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-969284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:24:26.051249  891830 out.go:177] * Starting "no-preload-969284" primary control-plane node in "no-preload-969284" cluster
	I0617 12:24:26.053018  891830 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0617 12:24:26.056013  891830 out.go:177] * Pulling base image v0.0.44-1718296336-19068 ...
	I0617 12:24:26.057871  891830 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime containerd
	I0617 12:24:26.058024  891830 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/config.json ...
	I0617 12:24:26.058357  891830 cache.go:107] acquiring lock: {Name:mk243257e07f919b2ce4ec4a4871d554704b6c64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 12:24:26.058438  891830 cache.go:115] /home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0617 12:24:26.058451  891830 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 99.625µs
	I0617 12:24:26.058473  891830 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0617 12:24:26.058484  891830 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 in local docker daemon
	I0617 12:24:26.058696  891830 cache.go:107] acquiring lock: {Name:mk3e2d26afb72a9caab1e4a57bb7d749940f769c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 12:24:26.058762  891830 cache.go:115] /home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 exists
	I0617 12:24:26.058776  891830 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.1" -> "/home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1" took 85.282µs
	I0617 12:24:26.058785  891830 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.1 -> /home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 succeeded
	I0617 12:24:26.058802  891830 cache.go:107] acquiring lock: {Name:mkf19adb451b68fd38d38d19f1c3e05f63061c95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 12:24:26.058836  891830 cache.go:115] /home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 exists
	I0617 12:24:26.058846  891830 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.1" -> "/home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1" took 45.193µs
	I0617 12:24:26.058852  891830 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.1 -> /home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 succeeded
	I0617 12:24:26.058862  891830 cache.go:107] acquiring lock: {Name:mk1fbb538aaf8284ef20d6b605d03f95089ee8f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 12:24:26.058893  891830 cache.go:115] /home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 exists
	I0617 12:24:26.058898  891830 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.1" -> "/home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1" took 37.825µs
	I0617 12:24:26.058909  891830 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.1 -> /home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 succeeded
	I0617 12:24:26.058934  891830 cache.go:107] acquiring lock: {Name:mkc22e49c1a8248d48e64588173eb9fb680c06ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 12:24:26.058967  891830 cache.go:115] /home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 exists
	I0617 12:24:26.058977  891830 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.1" -> "/home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1" took 43.765µs
	I0617 12:24:26.058984  891830 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.1 -> /home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 succeeded
	I0617 12:24:26.059001  891830 cache.go:107] acquiring lock: {Name:mke7bf450ee3472ba05814c4e0ae75470b38cae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 12:24:26.059037  891830 cache.go:115] /home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0617 12:24:26.059047  891830 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 47.088µs
	I0617 12:24:26.059053  891830 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0617 12:24:26.059070  891830 cache.go:107] acquiring lock: {Name:mkbfc7e8ac248fae5fc14c7b2e671537f17dac51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 12:24:26.059101  891830 cache.go:115] /home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0617 12:24:26.059111  891830 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 42.354µs
	I0617 12:24:26.059117  891830 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0617 12:24:26.059127  891830 cache.go:107] acquiring lock: {Name:mke364244907222cd431164f5ae50c6a9475132c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 12:24:26.059157  891830 cache.go:115] /home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0617 12:24:26.059166  891830 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 40.311µs
	I0617 12:24:26.059172  891830 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/19084-685849/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0617 12:24:26.059185  891830 cache.go:87] Successfully saved all images to host disk.
	I0617 12:24:26.084237  891830 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 in local docker daemon, skipping pull
	I0617 12:24:26.084266  891830 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 exists in daemon, skipping load
	I0617 12:24:26.084288  891830 cache.go:194] Successfully downloaded all kic artifacts
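The cache.go lines above only confirm that the pinned kicbase digest already exists in the local Docker daemon, so no pull is needed. Outside of minikube the same check can be reproduced by hand; a sketch, using the image reference copied from the log above:

    docker image inspect gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 --format '{{.Id}}'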
	I0617 12:24:26.084317  891830 start.go:360] acquireMachinesLock for no-preload-969284: {Name:mk27a850b1dea80dd0977f7deadc274339c820c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0617 12:24:26.084393  891830 start.go:364] duration metric: took 53.603µs to acquireMachinesLock for "no-preload-969284"
	I0617 12:24:26.084425  891830 start.go:96] Skipping create...Using existing machine configuration
	I0617 12:24:26.084436  891830 fix.go:54] fixHost starting: 
	I0617 12:24:26.084767  891830 cli_runner.go:164] Run: docker container inspect no-preload-969284 --format={{.State.Status}}
	I0617 12:24:26.102486  891830 fix.go:112] recreateIfNeeded on no-preload-969284: state=Stopped err=<nil>
	W0617 12:24:26.102526  891830 fix.go:138] unexpected machine state, will restart: <nil>
	I0617 12:24:26.104697  891830 out.go:177] * Restarting existing docker container for "no-preload-969284" ...
	I0617 12:24:23.656668  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:25.657525  885555 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:26.106683  891830 cli_runner.go:164] Run: docker start no-preload-969284
	I0617 12:24:26.417427  891830 cli_runner.go:164] Run: docker container inspect no-preload-969284 --format={{.State.Status}}
	I0617 12:24:26.434929  891830 kic.go:430] container "no-preload-969284" state is running.
	I0617 12:24:26.435309  891830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-969284
	I0617 12:24:26.458983  891830 profile.go:143] Saving config to /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/config.json ...
	I0617 12:24:26.460203  891830 machine.go:94] provisionDockerMachine start ...
	I0617 12:24:26.460387  891830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-969284
	I0617 12:24:26.483189  891830 main.go:141] libmachine: Using SSH client type: native
	I0617 12:24:26.483500  891830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2bb0] 0x3e5410 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I0617 12:24:26.483511  891830 main.go:141] libmachine: About to run SSH command:
	hostname
	I0617 12:24:26.484222  891830 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0617 12:24:29.610927  891830 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-969284
	
	I0617 12:24:29.610952  891830 ubuntu.go:169] provisioning hostname "no-preload-969284"
	I0617 12:24:29.611051  891830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-969284
	I0617 12:24:29.629319  891830 main.go:141] libmachine: Using SSH client type: native
	I0617 12:24:29.629562  891830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2bb0] 0x3e5410 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I0617 12:24:29.629577  891830 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-969284 && echo "no-preload-969284" | sudo tee /etc/hostname
	I0617 12:24:29.768254  891830 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-969284
	
	I0617 12:24:29.768332  891830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-969284
	I0617 12:24:29.784965  891830 main.go:141] libmachine: Using SSH client type: native
	I0617 12:24:29.785215  891830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2bb0] 0x3e5410 <nil>  [] 0s} 127.0.0.1 33837 <nil> <nil>}
	I0617 12:24:29.785231  891830 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-969284' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-969284/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-969284' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0617 12:24:29.911385  891830 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0617 12:24:29.911412  891830 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19084-685849/.minikube CaCertPath:/home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19084-685849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19084-685849/.minikube}
	I0617 12:24:29.911459  891830 ubuntu.go:177] setting up certificates
	I0617 12:24:29.911470  891830 provision.go:84] configureAuth start
	I0617 12:24:29.911545  891830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-969284
	I0617 12:24:29.928726  891830 provision.go:143] copyHostCerts
	I0617 12:24:29.928809  891830 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-685849/.minikube/ca.pem, removing ...
	I0617 12:24:29.928825  891830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-685849/.minikube/ca.pem
	I0617 12:24:29.928903  891830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19084-685849/.minikube/ca.pem (1078 bytes)
	I0617 12:24:29.929004  891830 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-685849/.minikube/cert.pem, removing ...
	I0617 12:24:29.929015  891830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-685849/.minikube/cert.pem
	I0617 12:24:29.929043  891830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19084-685849/.minikube/cert.pem (1123 bytes)
	I0617 12:24:29.929100  891830 exec_runner.go:144] found /home/jenkins/minikube-integration/19084-685849/.minikube/key.pem, removing ...
	I0617 12:24:29.929110  891830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19084-685849/.minikube/key.pem
	I0617 12:24:29.929135  891830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19084-685849/.minikube/key.pem (1679 bytes)
	I0617 12:24:29.929186  891830 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19084-685849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca-key.pem org=jenkins.no-preload-969284 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-969284]
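The provision step above regenerates the machine's server certificate with the SAN list shown (127.0.0.1, 192.168.76.2, localhost, minikube, no-preload-969284). If the SANs ever need to be confirmed manually, the file at the ServerCertPath listed earlier can be inspected with a standard openssl call; a sketch:

    openssl x509 -noout -text -in /home/jenkins/minikube-integration/19084-685849/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'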
	I0617 12:24:30.125704  891830 provision.go:177] copyRemoteCerts
	I0617 12:24:30.125781  891830 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0617 12:24:30.125846  891830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-969284
	I0617 12:24:30.145361  891830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/no-preload-969284/id_rsa Username:docker}
	I0617 12:24:30.245405  891830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0617 12:24:30.271951  891830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0617 12:24:30.298098  891830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0617 12:24:30.323276  891830 provision.go:87] duration metric: took 411.791663ms to configureAuth
	I0617 12:24:30.323303  891830 ubuntu.go:193] setting minikube options for container-runtime
	I0617 12:24:30.323537  891830 config.go:182] Loaded profile config "no-preload-969284": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0617 12:24:30.323554  891830 machine.go:97] duration metric: took 3.863337294s to provisionDockerMachine
	I0617 12:24:30.323563  891830 start.go:293] postStartSetup for "no-preload-969284" (driver="docker")
	I0617 12:24:30.323581  891830 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0617 12:24:30.323640  891830 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0617 12:24:30.323687  891830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-969284
	I0617 12:24:30.339382  891830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/no-preload-969284/id_rsa Username:docker}
	I0617 12:24:30.432799  891830 ssh_runner.go:195] Run: cat /etc/os-release
	I0617 12:24:30.435952  891830 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0617 12:24:30.435989  891830 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0617 12:24:30.436004  891830 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0617 12:24:30.436011  891830 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0617 12:24:30.436028  891830 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-685849/.minikube/addons for local assets ...
	I0617 12:24:30.436082  891830 filesync.go:126] Scanning /home/jenkins/minikube-integration/19084-685849/.minikube/files for local assets ...
	I0617 12:24:30.436163  891830 filesync.go:149] local asset: /home/jenkins/minikube-integration/19084-685849/.minikube/files/etc/ssl/certs/6912422.pem -> 6912422.pem in /etc/ssl/certs
	I0617 12:24:30.436266  891830 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0617 12:24:30.444972  891830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/files/etc/ssl/certs/6912422.pem --> /etc/ssl/certs/6912422.pem (1708 bytes)
	I0617 12:24:30.470482  891830 start.go:296] duration metric: took 146.896402ms for postStartSetup
	I0617 12:24:30.470643  891830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 12:24:30.470696  891830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-969284
	I0617 12:24:30.486281  891830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/no-preload-969284/id_rsa Username:docker}
	I0617 12:24:30.576876  891830 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0617 12:24:30.581453  891830 fix.go:56] duration metric: took 4.497009161s for fixHost
	I0617 12:24:30.581477  891830 start.go:83] releasing machines lock for "no-preload-969284", held for 4.497068262s
	I0617 12:24:30.581547  891830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-969284
	I0617 12:24:30.598277  891830 ssh_runner.go:195] Run: cat /version.json
	I0617 12:24:30.598295  891830 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0617 12:24:30.598338  891830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-969284
	I0617 12:24:30.598353  891830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-969284
	I0617 12:24:30.616820  891830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/no-preload-969284/id_rsa Username:docker}
	I0617 12:24:30.616939  891830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/no-preload-969284/id_rsa Username:docker}
	I0617 12:24:30.832874  891830 ssh_runner.go:195] Run: systemctl --version
	I0617 12:24:30.837521  891830 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0617 12:24:30.842267  891830 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0617 12:24:30.860983  891830 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
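The find/sed pair above patches any loopback CNI config so that it carries an explicit "name" field and a cniVersion pinned to 1.0.0. The exact filename and any extra fields are not shown in the log, but a minimal patched loopback config would look roughly like this sketch:

    {
        "cniVersion": "1.0.0",
        "name": "loopback",
        "type": "loopback"
    }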
	I0617 12:24:30.861075  891830 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0617 12:24:30.870909  891830 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0617 12:24:30.870974  891830 start.go:494] detecting cgroup driver to use...
	I0617 12:24:30.871024  891830 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0617 12:24:30.871081  891830 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0617 12:24:30.885699  891830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0617 12:24:30.897538  891830 docker.go:217] disabling cri-docker service (if available) ...
	I0617 12:24:30.897651  891830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0617 12:24:30.911257  891830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0617 12:24:30.923053  891830 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0617 12:24:31.016670  891830 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0617 12:24:31.132456  891830 docker.go:233] disabling docker service ...
	I0617 12:24:31.132529  891830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0617 12:24:31.147082  891830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0617 12:24:31.160160  891830 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0617 12:24:31.263004  891830 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0617 12:24:31.353791  891830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0617 12:24:31.365502  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0617 12:24:31.383915  891830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0617 12:24:31.396480  891830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0617 12:24:31.407817  891830 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0617 12:24:31.407965  891830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0617 12:24:31.424424  891830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0617 12:24:31.434491  891830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0617 12:24:31.444493  891830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0617 12:24:31.454459  891830 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0617 12:24:31.464456  891830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0617 12:24:31.474580  891830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0617 12:24:31.485180  891830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0617 12:24:31.495962  891830 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0617 12:24:31.505571  891830 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0617 12:24:31.514610  891830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:24:31.603265  891830 ssh_runner.go:195] Run: sudo systemctl restart containerd
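The run of sed commands leading up to the containerd restart above rewrites /etc/containerd/config.toml. Only the keys touched by those edits are visible in the log; assuming containerd 1.6's default (version 2) config layout, the affected fragment ends up looking roughly like this sketch:

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      restrict_oom_score_adj = false
      sandbox_image = "registry.k8s.io/pause:3.9"
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false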
	I0617 12:24:31.769986  891830 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0617 12:24:31.770080  891830 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0617 12:24:31.774186  891830 start.go:562] Will wait 60s for crictl version
	I0617 12:24:31.774270  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:24:31.780139  891830 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0617 12:24:31.821535  891830 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.33
	RuntimeApiVersion:  v1
	I0617 12:24:31.821632  891830 ssh_runner.go:195] Run: containerd --version
	I0617 12:24:31.844943  891830 ssh_runner.go:195] Run: containerd --version
	I0617 12:24:31.870538  891830 out.go:177] * Preparing Kubernetes v1.30.1 on containerd 1.6.33 ...
	I0617 12:24:27.657455  885555 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"True"
	I0617 12:24:27.657480  885555 pod_ready.go:81] duration metric: took 1m6.507261232s for pod "kube-controller-manager-old-k8s-version-440919" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:27.657492  885555 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6cbbp" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:27.663106  885555 pod_ready.go:92] pod "kube-proxy-6cbbp" in "kube-system" namespace has status "Ready":"True"
	I0617 12:24:27.663133  885555 pod_ready.go:81] duration metric: took 5.634104ms for pod "kube-proxy-6cbbp" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:27.663144  885555 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:29.668964  885555 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:31.670666  885555 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:31.872606  891830 cli_runner.go:164] Run: docker network inspect no-preload-969284 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0617 12:24:31.888092  891830 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0617 12:24:31.891696  891830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:24:31.902537  891830 kubeadm.go:877] updating cluster {Name:no-preload-969284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-969284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenk
ins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0617 12:24:31.902687  891830 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime containerd
	I0617 12:24:31.902735  891830 ssh_runner.go:195] Run: sudo crictl images --output json
	I0617 12:24:31.939705  891830 containerd.go:627] all images are preloaded for containerd runtime.
	I0617 12:24:31.939726  891830 cache_images.go:84] Images are preloaded, skipping loading
	I0617 12:24:31.939734  891830 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.30.1 containerd true true} ...
	I0617 12:24:31.939836  891830 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-969284 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-969284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0617 12:24:31.939897  891830 ssh_runner.go:195] Run: sudo crictl info
	I0617 12:24:31.983857  891830 cni.go:84] Creating CNI manager for ""
	I0617 12:24:31.983885  891830 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0617 12:24:31.983896  891830 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0617 12:24:31.983918  891830 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-969284 NodeName:no-preload-969284 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0617 12:24:31.984062  891830 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-969284"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
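The kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down (2171 bytes). A file like this can also be sanity-checked offline with the kubeadm binary already on the node; a hypothetical invocation, not part of this run:

    sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new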
	
	I0617 12:24:31.984126  891830 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0617 12:24:31.994079  891830 binaries.go:44] Found k8s binaries, skipping transfer
	I0617 12:24:31.994158  891830 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0617 12:24:32.006995  891830 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0617 12:24:32.028169  891830 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0617 12:24:32.048064  891830 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0617 12:24:32.067538  891830 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0617 12:24:32.071185  891830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0617 12:24:32.082891  891830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:24:32.183444  891830 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:24:32.198479  891830 certs.go:68] Setting up /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284 for IP: 192.168.76.2
	I0617 12:24:32.198503  891830 certs.go:194] generating shared ca certs ...
	I0617 12:24:32.198548  891830 certs.go:226] acquiring lock for ca certs: {Name:mkd182a8d082c6d0615c99aed3d4d2e0a9bb102c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:24:32.198728  891830 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19084-685849/.minikube/ca.key
	I0617 12:24:32.198800  891830 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19084-685849/.minikube/proxy-client-ca.key
	I0617 12:24:32.198813  891830 certs.go:256] generating profile certs ...
	I0617 12:24:32.198925  891830 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/client.key
	I0617 12:24:32.199032  891830 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/apiserver.key.aa109b3d
	I0617 12:24:32.199110  891830 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/proxy-client.key
	I0617 12:24:32.199251  891830 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/691242.pem (1338 bytes)
	W0617 12:24:32.199316  891830 certs.go:480] ignoring /home/jenkins/minikube-integration/19084-685849/.minikube/certs/691242_empty.pem, impossibly tiny 0 bytes
	I0617 12:24:32.199332  891830 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca-key.pem (1679 bytes)
	I0617 12:24:32.199370  891830 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/ca.pem (1078 bytes)
	I0617 12:24:32.199469  891830 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/cert.pem (1123 bytes)
	I0617 12:24:32.199521  891830 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-685849/.minikube/certs/key.pem (1679 bytes)
	I0617 12:24:32.199626  891830 certs.go:484] found cert: /home/jenkins/minikube-integration/19084-685849/.minikube/files/etc/ssl/certs/6912422.pem (1708 bytes)
	I0617 12:24:32.200664  891830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0617 12:24:32.228869  891830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0617 12:24:32.252957  891830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0617 12:24:32.279961  891830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0617 12:24:32.307294  891830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0617 12:24:32.335136  891830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0617 12:24:32.363868  891830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0617 12:24:32.409801  891830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0617 12:24:32.445114  891830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/certs/691242.pem --> /usr/share/ca-certificates/691242.pem (1338 bytes)
	I0617 12:24:32.469705  891830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/files/etc/ssl/certs/6912422.pem --> /usr/share/ca-certificates/6912422.pem (1708 bytes)
	I0617 12:24:32.495899  891830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19084-685849/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0617 12:24:32.520788  891830 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0617 12:24:32.538902  891830 ssh_runner.go:195] Run: openssl version
	I0617 12:24:32.546251  891830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6912422.pem && ln -fs /usr/share/ca-certificates/6912422.pem /etc/ssl/certs/6912422.pem"
	I0617 12:24:32.556068  891830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6912422.pem
	I0617 12:24:32.559283  891830 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 17 11:43 /usr/share/ca-certificates/6912422.pem
	I0617 12:24:32.559366  891830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6912422.pem
	I0617 12:24:32.565986  891830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6912422.pem /etc/ssl/certs/3ec20f2e.0"
	I0617 12:24:32.574849  891830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0617 12:24:32.584124  891830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:24:32.587410  891830 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 17 11:36 /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:24:32.587504  891830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0617 12:24:32.594140  891830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0617 12:24:32.603139  891830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/691242.pem && ln -fs /usr/share/ca-certificates/691242.pem /etc/ssl/certs/691242.pem"
	I0617 12:24:32.612714  891830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/691242.pem
	I0617 12:24:32.616642  891830 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 17 11:43 /usr/share/ca-certificates/691242.pem
	I0617 12:24:32.616748  891830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/691242.pem
	I0617 12:24:32.624262  891830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/691242.pem /etc/ssl/certs/51391683.0"
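Each of the three certificate blocks above follows the same pattern: hash the PEM with openssl and expose it under /etc/ssl/certs via a hash-named symlink, which is how OpenSSL locates trusted CAs. An equivalent shell sketch, with <cert> standing in for any of the three file names shown:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/<cert>.pem)
    sudo ln -fs /usr/share/ca-certificates/<cert>.pem /etc/ssl/certs/${hash}.0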
	I0617 12:24:32.633634  891830 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0617 12:24:32.637345  891830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0617 12:24:32.644689  891830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0617 12:24:32.652170  891830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0617 12:24:32.659377  891830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0617 12:24:32.668199  891830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0617 12:24:32.675928  891830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0617 12:24:32.682799  891830 kubeadm.go:391] StartCluster: {Name:no-preload-969284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-969284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins
:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 12:24:32.682908  891830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0617 12:24:32.682970  891830 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0617 12:24:32.722520  891830 cri.go:89] found id: "a03033990d5f4a4a8e56b832e073b41aa33e77fe1e7850ea5979019eeabcbe08"
	I0617 12:24:32.722543  891830 cri.go:89] found id: "f42795d5fdc71f939390bd8229f9fd7d76b12052b8a542b58c5a620139d2f946"
	I0617 12:24:32.722548  891830 cri.go:89] found id: "a38bf049be06442d8d4f209b823da2b400354201d7845d868e368d47ee2a6dcf"
	I0617 12:24:32.722554  891830 cri.go:89] found id: "01ed34dcf68b2a9b78b6f8fe23fa835976ee0d377021798f8d7cebec764710e6"
	I0617 12:24:32.722560  891830 cri.go:89] found id: "62eecdd20028540e531a520f04ce791f3f091b0ef3cd8168b4353214efb573e6"
	I0617 12:24:32.722565  891830 cri.go:89] found id: "c95280c809832f7685f42b6a8ea098a92fa8efb8d9b0a8c39fc09a23b3337500"
	I0617 12:24:32.722568  891830 cri.go:89] found id: "9fa6abcdac59d4ec0a779d25470caa47b7bbb178951b85491171856b54b192f3"
	I0617 12:24:32.722571  891830 cri.go:89] found id: "2f5e7836f49bbcde978b575f5a5e73c8896b6f68373d378f5ac21cdbb3ac9861"
	I0617 12:24:32.722574  891830 cri.go:89] found id: ""
	I0617 12:24:32.722629  891830 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0617 12:24:32.735393  891830 cri.go:116] JSON = null
	W0617 12:24:32.735515  891830 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
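The warning above comes from cross-checking two views of the runtime: "crictl ps -a --quiet" returned 8 container IDs for kube-system, while "runc ... list -f json" returned null, so no paused containers were found to unpause. A minimal Go sketch of that kind of count comparison (a hypothetical illustration with placeholder data, not minikube's code):

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Output of `crictl ps -a --quiet ...`: one container ID per line (placeholder IDs).
	crictlOut := "aaaa\nbbbb\ncccc\n"
	var psIDs []string
	for _, line := range strings.Split(strings.TrimSpace(crictlOut), "\n") {
		if line != "" {
			psIDs = append(psIDs, line)
		}
	}

	// Output of `runc list -f json`: a JSON array of containers; "null" means none listed.
	runcOut := "null"
	var listed []map[string]any
	_ = json.Unmarshal([]byte(runcOut), &listed)

	if len(listed) != len(psIDs) {
		fmt.Printf("list returned %d containers, but ps returned %d\n", len(listed), len(psIDs))
	}
}
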
	I0617 12:24:32.735601  891830 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0617 12:24:32.744611  891830 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0617 12:24:32.744683  891830 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0617 12:24:32.744711  891830 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0617 12:24:32.744777  891830 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0617 12:24:32.755331  891830 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0617 12:24:32.755943  891830 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-969284" does not appear in /home/jenkins/minikube-integration/19084-685849/kubeconfig
	I0617 12:24:32.756205  891830 kubeconfig.go:62] /home/jenkins/minikube-integration/19084-685849/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-969284" cluster setting kubeconfig missing "no-preload-969284" context setting]
	I0617 12:24:32.756639  891830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-685849/kubeconfig: {Name:mk0f1db8295cd0d3b8a0428491dac563579b7b2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:24:32.758005  891830 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0617 12:24:32.768699  891830 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.76.2
	I0617 12:24:32.768772  891830 kubeadm.go:591] duration metric: took 24.039826ms to restartPrimaryControlPlane
	I0617 12:24:32.768787  891830 kubeadm.go:393] duration metric: took 86.000077ms to StartCluster
	I0617 12:24:32.768803  891830 settings.go:142] acquiring lock: {Name:mk2a85dcb9c00537cffe742aea475ca7d2cf09a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:24:32.768860  891830 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19084-685849/kubeconfig
	I0617 12:24:32.769730  891830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19084-685849/kubeconfig: {Name:mk0f1db8295cd0d3b8a0428491dac563579b7b2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0617 12:24:32.769925  891830 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0617 12:24:32.772985  891830 out.go:177] * Verifying Kubernetes components...
	I0617 12:24:32.770309  891830 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0617 12:24:32.770379  891830 config.go:182] Loaded profile config "no-preload-969284": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0617 12:24:32.774682  891830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0617 12:24:32.774689  891830 addons.go:69] Setting dashboard=true in profile "no-preload-969284"
	I0617 12:24:32.774716  891830 addons.go:234] Setting addon dashboard=true in "no-preload-969284"
	W0617 12:24:32.774722  891830 addons.go:243] addon dashboard should already be in state true
	I0617 12:24:32.774746  891830 host.go:66] Checking if "no-preload-969284" exists ...
	I0617 12:24:32.774683  891830 addons.go:69] Setting storage-provisioner=true in profile "no-preload-969284"
	I0617 12:24:32.774781  891830 addons.go:234] Setting addon storage-provisioner=true in "no-preload-969284"
	W0617 12:24:32.774789  891830 addons.go:243] addon storage-provisioner should already be in state true
	I0617 12:24:32.774809  891830 host.go:66] Checking if "no-preload-969284" exists ...
	I0617 12:24:32.775177  891830 cli_runner.go:164] Run: docker container inspect no-preload-969284 --format={{.State.Status}}
	I0617 12:24:32.775185  891830 cli_runner.go:164] Run: docker container inspect no-preload-969284 --format={{.State.Status}}
	I0617 12:24:32.775741  891830 addons.go:69] Setting metrics-server=true in profile "no-preload-969284"
	I0617 12:24:32.775774  891830 addons.go:234] Setting addon metrics-server=true in "no-preload-969284"
	W0617 12:24:32.775782  891830 addons.go:243] addon metrics-server should already be in state true
	I0617 12:24:32.775807  891830 host.go:66] Checking if "no-preload-969284" exists ...
	I0617 12:24:32.776199  891830 cli_runner.go:164] Run: docker container inspect no-preload-969284 --format={{.State.Status}}
	I0617 12:24:32.776358  891830 addons.go:69] Setting default-storageclass=true in profile "no-preload-969284"
	I0617 12:24:32.776388  891830 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-969284"
	I0617 12:24:32.776651  891830 cli_runner.go:164] Run: docker container inspect no-preload-969284 --format={{.State.Status}}
	I0617 12:24:32.828937  891830 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0617 12:24:32.830976  891830 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:24:32.830995  891830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0617 12:24:32.831060  891830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-969284
	I0617 12:24:32.836111  891830 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0617 12:24:32.839017  891830 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0617 12:24:32.841735  891830 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0617 12:24:32.841764  891830 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0617 12:24:32.841827  891830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-969284
	I0617 12:24:32.840711  891830 addons.go:234] Setting addon default-storageclass=true in "no-preload-969284"
	W0617 12:24:32.843613  891830 addons.go:243] addon default-storageclass should already be in state true
	I0617 12:24:32.843645  891830 host.go:66] Checking if "no-preload-969284" exists ...
	I0617 12:24:32.844068  891830 cli_runner.go:164] Run: docker container inspect no-preload-969284 --format={{.State.Status}}
	I0617 12:24:32.861120  891830 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0617 12:24:32.862950  891830 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0617 12:24:32.862973  891830 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0617 12:24:32.863039  891830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-969284
	I0617 12:24:32.899130  891830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/no-preload-969284/id_rsa Username:docker}
	I0617 12:24:32.912817  891830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/no-preload-969284/id_rsa Username:docker}
	I0617 12:24:32.922278  891830 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0617 12:24:32.922298  891830 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0617 12:24:32.922362  891830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-969284
	I0617 12:24:32.923874  891830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/no-preload-969284/id_rsa Username:docker}
	I0617 12:24:32.947175  891830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/no-preload-969284/id_rsa Username:docker}
	I0617 12:24:32.982482  891830 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0617 12:24:33.088152  891830 node_ready.go:35] waiting up to 6m0s for node "no-preload-969284" to be "Ready" ...
	I0617 12:24:33.158917  891830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0617 12:24:33.271360  891830 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0617 12:24:33.271386  891830 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0617 12:24:33.278939  891830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0617 12:24:33.313989  891830 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0617 12:24:33.314023  891830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0617 12:24:33.379511  891830 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0617 12:24:33.379541  891830 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0617 12:24:33.382738  891830 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0617 12:24:33.382762  891830 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0617 12:24:33.562652  891830 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0617 12:24:33.562684  891830 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0617 12:24:33.562765  891830 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:24:33.562779  891830 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0617 12:24:33.712319  891830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0617 12:24:33.729045  891830 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0617 12:24:33.729070  891830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0617 12:24:33.890332  891830 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0617 12:24:33.890360  891830 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0617 12:24:33.918218  891830 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0617 12:24:33.918299  891830 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0617 12:24:33.959884  891830 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0617 12:24:33.959965  891830 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0617 12:24:34.022969  891830 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0617 12:24:34.023050  891830 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0617 12:24:34.058077  891830 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0617 12:24:34.058157  891830 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0617 12:24:34.090587  891830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0617 12:24:34.189725  885555 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:36.670390  885555 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:37.900537  891830 node_ready.go:49] node "no-preload-969284" has status "Ready":"True"
	I0617 12:24:37.900561  891830 node_ready.go:38] duration metric: took 4.812366295s for node "no-preload-969284" to be "Ready" ...
	I0617 12:24:37.900571  891830 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:24:37.974137  891830 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9sspv" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:38.077009  891830 pod_ready.go:92] pod "coredns-7db6d8ff4d-9sspv" in "kube-system" namespace has status "Ready":"True"
	I0617 12:24:38.077079  891830 pod_ready.go:81] duration metric: took 102.866202ms for pod "coredns-7db6d8ff4d-9sspv" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:38.077107  891830 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-969284" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:38.093106  891830 pod_ready.go:92] pod "etcd-no-preload-969284" in "kube-system" namespace has status "Ready":"True"
	I0617 12:24:38.093180  891830 pod_ready.go:81] duration metric: took 16.051294ms for pod "etcd-no-preload-969284" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:38.093210  891830 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-969284" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:38.134868  891830 pod_ready.go:92] pod "kube-apiserver-no-preload-969284" in "kube-system" namespace has status "Ready":"True"
	I0617 12:24:38.134938  891830 pod_ready.go:81] duration metric: took 41.706666ms for pod "kube-apiserver-no-preload-969284" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:38.134964  891830 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-969284" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:38.158810  891830 pod_ready.go:92] pod "kube-controller-manager-no-preload-969284" in "kube-system" namespace has status "Ready":"True"
	I0617 12:24:38.158880  891830 pod_ready.go:81] duration metric: took 23.895033ms for pod "kube-controller-manager-no-preload-969284" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:38.158907  891830 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x8dcr" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:38.198369  891830 pod_ready.go:92] pod "kube-proxy-x8dcr" in "kube-system" namespace has status "Ready":"True"
	I0617 12:24:38.198441  891830 pod_ready.go:81] duration metric: took 39.503389ms for pod "kube-proxy-x8dcr" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:38.198468  891830 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-969284" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:38.525378  891830 pod_ready.go:92] pod "kube-scheduler-no-preload-969284" in "kube-system" namespace has status "Ready":"True"
	I0617 12:24:38.525400  891830 pod_ready.go:81] duration metric: took 326.889829ms for pod "kube-scheduler-no-preload-969284" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:38.525416  891830 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:40.532067  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:40.935550  891830 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.776595767s)
	I0617 12:24:40.935664  891830 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.656699508s)
	I0617 12:24:40.956758  891830 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.24439221s)
	I0617 12:24:40.956860  891830 addons.go:475] Verifying addon metrics-server=true in "no-preload-969284"
	I0617 12:24:40.979384  891830 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.888687165s)
	I0617 12:24:40.981507  891830 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-969284 addons enable metrics-server
	
	I0617 12:24:40.983568  891830 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0617 12:24:38.670551  885555 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:40.670731  885555 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:40.985838  891830 addons.go:510] duration metric: took 8.215518695s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0617 12:24:42.534265  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:45.032962  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:43.169289  885555 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:45.170728  885555 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:47.532167  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:50.032633  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:47.672178  885555 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:49.670525  885555 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace has status "Ready":"True"
	I0617 12:24:49.670549  885555 pod_ready.go:81] duration metric: took 22.007397004s for pod "kube-scheduler-old-k8s-version-440919" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:49.670560  885555 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace to be "Ready" ...
	I0617 12:24:51.676638  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:52.531562  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:54.532175  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:53.677637  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:55.678593  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:57.033150  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:59.035721  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:24:57.680006  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:00.227719  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:01.532365  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:04.031838  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:02.676617  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:04.677161  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:06.677283  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:06.032300  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:08.032393  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:10.033278  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:08.677787  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:11.177266  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:12.531719  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:14.532687  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:13.178144  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:15.677598  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:17.031482  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:19.032430  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:18.176235  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:20.179707  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:21.534654  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:24.031843  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:22.676872  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:24.677972  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:26.679201  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:26.032273  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:28.531386  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:30.531788  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:29.177348  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:31.178088  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:33.032904  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:35.531955  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:33.179700  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:35.677468  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:38.031972  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:40.032215  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:38.177126  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:40.676371  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:42.032691  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:44.531515  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:42.677310  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:45.177712  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:46.532080  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:49.032078  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:47.178194  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:49.676614  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:51.676732  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:51.532239  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:54.035931  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:53.677279  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:56.177048  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:56.531325  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:59.031276  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:25:58.677258  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:00.678239  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:01.032022  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:03.032290  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:05.532009  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:03.177762  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:05.678227  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:08.030786  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:10.046512  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:08.177217  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:10.177299  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:12.531638  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:15.033113  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:12.676993  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:14.677092  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:16.677521  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:17.532344  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:20.032057  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:19.188232  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:21.676157  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:22.033806  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:24.531691  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:23.677286  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:25.677431  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:27.032165  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:29.531493  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:27.684502  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:30.177981  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:31.532426  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:34.031625  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:32.178527  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:34.676710  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:36.677503  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:36.032668  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:38.531190  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:40.531987  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:39.178172  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:41.676928  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:43.031853  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:45.037567  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:44.177512  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:46.178137  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:47.531847  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:50.032902  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:48.677227  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:51.176960  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:52.530909  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:54.531787  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:53.185227  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:55.676741  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:57.032868  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:59.531854  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:26:57.677237  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:00.242260  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:02.032354  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:04.532005  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:02.676826  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:05.177893  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:06.532460  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:09.031763  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:07.178330  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:09.677066  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:11.031814  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:13.531100  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:15.531419  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:12.176887  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:14.177719  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:16.677603  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:18.032120  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:20.032306  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:19.177348  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:21.676735  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:22.532457  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:25.032231  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:23.682194  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:26.177222  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:27.032309  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:29.032780  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:28.177272  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:30.177377  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:31.531909  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:34.032691  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:32.178204  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:34.676512  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:36.532548  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:39.031497  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:37.177445  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:39.177612  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:41.178613  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:41.531851  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:44.032366  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:43.676535  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:45.677032  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:46.530687  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:48.531320  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:50.532784  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:47.677406  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:50.177541  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:53.033079  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:55.033200  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:52.677215  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:55.177797  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:57.535219  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:00.150138  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:57.179007  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:27:59.676566  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:01.677792  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:02.532360  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:05.031406  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:04.176440  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:06.176925  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:07.031753  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:09.530983  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:08.176996  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:10.177265  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:11.531374  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:14.031714  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:12.177659  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:14.675610  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:16.676935  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:16.531763  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:19.031832  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:19.177812  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:21.677036  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:21.033082  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:23.530880  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:25.531390  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:23.677547  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:26.177720  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:28.031668  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:30.047405  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:28.676925  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:31.177302  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:32.531319  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:34.532012  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:33.677301  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:35.677900  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:37.035488  891830 pod_ready.go:102] pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:38.530981  891830 pod_ready.go:81] duration metric: took 4m0.005550329s for pod "metrics-server-569cc877fc-j84zj" in "kube-system" namespace to be "Ready" ...
	E0617 12:28:38.531008  891830 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0617 12:28:38.531018  891830 pod_ready.go:38] duration metric: took 4m0.630437252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:28:38.531037  891830 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:28:38.531065  891830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:28:38.531130  891830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:28:38.583507  891830 cri.go:89] found id: "88428f0be3d18b9b35186a3e2a952464f5105a8a8603ea7296799c5e49ff017e"
	I0617 12:28:38.583583  891830 cri.go:89] found id: "9fa6abcdac59d4ec0a779d25470caa47b7bbb178951b85491171856b54b192f3"
	I0617 12:28:38.583604  891830 cri.go:89] found id: ""
	I0617 12:28:38.583630  891830 logs.go:276] 2 containers: [88428f0be3d18b9b35186a3e2a952464f5105a8a8603ea7296799c5e49ff017e 9fa6abcdac59d4ec0a779d25470caa47b7bbb178951b85491171856b54b192f3]
	I0617 12:28:38.583715  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:38.587397  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:38.591272  891830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0617 12:28:38.591346  891830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:28:38.632321  891830 cri.go:89] found id: "8111fbc70186e0bb535a00ed645a988d91bb571f841eb3cb159731db62b5c744"
	I0617 12:28:38.632343  891830 cri.go:89] found id: "62eecdd20028540e531a520f04ce791f3f091b0ef3cd8168b4353214efb573e6"
	I0617 12:28:38.632348  891830 cri.go:89] found id: ""
	I0617 12:28:38.632356  891830 logs.go:276] 2 containers: [8111fbc70186e0bb535a00ed645a988d91bb571f841eb3cb159731db62b5c744 62eecdd20028540e531a520f04ce791f3f091b0ef3cd8168b4353214efb573e6]
	I0617 12:28:38.632416  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:38.636046  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:38.639231  891830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0617 12:28:38.639350  891830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:28:38.686789  891830 cri.go:89] found id: "42056191583d1f5754960ad15b097a576ab50f1c8782794d2926388983ae1fd0"
	I0617 12:28:38.686826  891830 cri.go:89] found id: "a03033990d5f4a4a8e56b832e073b41aa33e77fe1e7850ea5979019eeabcbe08"
	I0617 12:28:38.686832  891830 cri.go:89] found id: ""
	I0617 12:28:38.686840  891830 logs.go:276] 2 containers: [42056191583d1f5754960ad15b097a576ab50f1c8782794d2926388983ae1fd0 a03033990d5f4a4a8e56b832e073b41aa33e77fe1e7850ea5979019eeabcbe08]
	I0617 12:28:38.686917  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:38.690801  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:38.694756  891830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:28:38.694859  891830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:28:38.740176  891830 cri.go:89] found id: "ab34521c845c14735b9240393d84a4ea139edff737eb52bbcab5142cb4b14d9c"
	I0617 12:28:38.740200  891830 cri.go:89] found id: "c95280c809832f7685f42b6a8ea098a92fa8efb8d9b0a8c39fc09a23b3337500"
	I0617 12:28:38.740207  891830 cri.go:89] found id: ""
	I0617 12:28:38.740214  891830 logs.go:276] 2 containers: [ab34521c845c14735b9240393d84a4ea139edff737eb52bbcab5142cb4b14d9c c95280c809832f7685f42b6a8ea098a92fa8efb8d9b0a8c39fc09a23b3337500]
	I0617 12:28:38.740270  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:38.743811  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:38.746994  891830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:28:38.747096  891830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:28:38.794970  891830 cri.go:89] found id: "9d7d9f16d42ab7f1ef0a1211251bcc155fa81e5e371261276594bcc82d37cf8a"
	I0617 12:28:38.794993  891830 cri.go:89] found id: "01ed34dcf68b2a9b78b6f8fe23fa835976ee0d377021798f8d7cebec764710e6"
	I0617 12:28:38.794999  891830 cri.go:89] found id: ""
	I0617 12:28:38.795007  891830 logs.go:276] 2 containers: [9d7d9f16d42ab7f1ef0a1211251bcc155fa81e5e371261276594bcc82d37cf8a 01ed34dcf68b2a9b78b6f8fe23fa835976ee0d377021798f8d7cebec764710e6]
	I0617 12:28:38.795121  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:38.799006  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:38.802864  891830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:28:38.802974  891830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:28:38.852265  891830 cri.go:89] found id: "6984630f73f8c733766dddc7c177b5938f6a2bc7caae1005e88bb8e518c44291"
	I0617 12:28:38.852289  891830 cri.go:89] found id: "2f5e7836f49bbcde978b575f5a5e73c8896b6f68373d378f5ac21cdbb3ac9861"
	I0617 12:28:38.852295  891830 cri.go:89] found id: ""
	I0617 12:28:38.852303  891830 logs.go:276] 2 containers: [6984630f73f8c733766dddc7c177b5938f6a2bc7caae1005e88bb8e518c44291 2f5e7836f49bbcde978b575f5a5e73c8896b6f68373d378f5ac21cdbb3ac9861]
	I0617 12:28:38.852377  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:38.855782  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:38.859197  891830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0617 12:28:38.859280  891830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:28:38.903619  891830 cri.go:89] found id: "30edefbfd01454454534693710d1b9bd45a0e7977d3dec0f9d75f188f1fa8a94"
	I0617 12:28:38.903652  891830 cri.go:89] found id: "f42795d5fdc71f939390bd8229f9fd7d76b12052b8a542b58c5a620139d2f946"
	I0617 12:28:38.903657  891830 cri.go:89] found id: ""
	I0617 12:28:38.903664  891830 logs.go:276] 2 containers: [30edefbfd01454454534693710d1b9bd45a0e7977d3dec0f9d75f188f1fa8a94 f42795d5fdc71f939390bd8229f9fd7d76b12052b8a542b58c5a620139d2f946]
	I0617 12:28:38.903734  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:38.907475  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:38.920719  891830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:28:38.920801  891830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:28:38.969561  891830 cri.go:89] found id: "57b20a09fe83db9b72fd5d65c95977aaf4ccbde1c4818369f82d56b0262ab901"
	I0617 12:28:38.969634  891830 cri.go:89] found id: ""
	I0617 12:28:38.969656  891830 logs.go:276] 1 containers: [57b20a09fe83db9b72fd5d65c95977aaf4ccbde1c4818369f82d56b0262ab901]
	I0617 12:28:38.969748  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:38.973346  891830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:28:38.973469  891830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:28:39.017607  891830 cri.go:89] found id: "001b1f47a30f95ad76e1f203589b8d694c754fb019310df90cecccebeb2d3cdf"
	I0617 12:28:39.017632  891830 cri.go:89] found id: "1dd3c3af6ac6a25bab3ac499673e47d8d966a6d30e2812850ffa8ddfa7ed88a4"
	I0617 12:28:39.017637  891830 cri.go:89] found id: ""
	I0617 12:28:39.017645  891830 logs.go:276] 2 containers: [001b1f47a30f95ad76e1f203589b8d694c754fb019310df90cecccebeb2d3cdf 1dd3c3af6ac6a25bab3ac499673e47d8d966a6d30e2812850ffa8ddfa7ed88a4]
	I0617 12:28:39.017706  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:39.021745  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:39.025759  891830 logs.go:123] Gathering logs for kindnet [30edefbfd01454454534693710d1b9bd45a0e7977d3dec0f9d75f188f1fa8a94] ...
	I0617 12:28:39.025787  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30edefbfd01454454534693710d1b9bd45a0e7977d3dec0f9d75f188f1fa8a94"
	I0617 12:28:39.085076  891830 logs.go:123] Gathering logs for kubernetes-dashboard [57b20a09fe83db9b72fd5d65c95977aaf4ccbde1c4818369f82d56b0262ab901] ...
	I0617 12:28:39.085111  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b20a09fe83db9b72fd5d65c95977aaf4ccbde1c4818369f82d56b0262ab901"
	I0617 12:28:39.130416  891830 logs.go:123] Gathering logs for storage-provisioner [1dd3c3af6ac6a25bab3ac499673e47d8d966a6d30e2812850ffa8ddfa7ed88a4] ...
	I0617 12:28:39.130445  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1dd3c3af6ac6a25bab3ac499673e47d8d966a6d30e2812850ffa8ddfa7ed88a4"
	I0617 12:28:39.179356  891830 logs.go:123] Gathering logs for container status ...
	I0617 12:28:39.179386  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:28:39.230410  891830 logs.go:123] Gathering logs for kube-apiserver [9fa6abcdac59d4ec0a779d25470caa47b7bbb178951b85491171856b54b192f3] ...
	I0617 12:28:39.230439  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9fa6abcdac59d4ec0a779d25470caa47b7bbb178951b85491171856b54b192f3"
	I0617 12:28:39.294604  891830 logs.go:123] Gathering logs for kube-apiserver [88428f0be3d18b9b35186a3e2a952464f5105a8a8603ea7296799c5e49ff017e] ...
	I0617 12:28:39.294635  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88428f0be3d18b9b35186a3e2a952464f5105a8a8603ea7296799c5e49ff017e"
	I0617 12:28:39.352678  891830 logs.go:123] Gathering logs for coredns [42056191583d1f5754960ad15b097a576ab50f1c8782794d2926388983ae1fd0] ...
	I0617 12:28:39.352714  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42056191583d1f5754960ad15b097a576ab50f1c8782794d2926388983ae1fd0"
	I0617 12:28:39.398642  891830 logs.go:123] Gathering logs for coredns [a03033990d5f4a4a8e56b832e073b41aa33e77fe1e7850ea5979019eeabcbe08] ...
	I0617 12:28:39.398672  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a03033990d5f4a4a8e56b832e073b41aa33e77fe1e7850ea5979019eeabcbe08"
	I0617 12:28:39.439777  891830 logs.go:123] Gathering logs for kube-scheduler [c95280c809832f7685f42b6a8ea098a92fa8efb8d9b0a8c39fc09a23b3337500] ...
	I0617 12:28:39.439807  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c95280c809832f7685f42b6a8ea098a92fa8efb8d9b0a8c39fc09a23b3337500"
	I0617 12:28:39.488281  891830 logs.go:123] Gathering logs for storage-provisioner [001b1f47a30f95ad76e1f203589b8d694c754fb019310df90cecccebeb2d3cdf] ...
	I0617 12:28:39.488315  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001b1f47a30f95ad76e1f203589b8d694c754fb019310df90cecccebeb2d3cdf"
	I0617 12:28:39.526150  891830 logs.go:123] Gathering logs for containerd ...
	I0617 12:28:39.526177  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0617 12:28:39.584762  891830 logs.go:123] Gathering logs for kubelet ...
	I0617 12:28:39.584799  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0617 12:28:39.629930  891830 logs.go:138] Found kubelet problem: Jun 17 12:24:50 no-preload-969284 kubelet[659]: W0617 12:24:50.765826     659 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-969284" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-969284' and this object
	W0617 12:28:39.630219  891830 logs.go:138] Found kubelet problem: Jun 17 12:24:50 no-preload-969284 kubelet[659]: E0617 12:24:50.766036     659 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-969284" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-969284' and this object
	I0617 12:28:39.661232  891830 logs.go:123] Gathering logs for kube-scheduler [ab34521c845c14735b9240393d84a4ea139edff737eb52bbcab5142cb4b14d9c] ...
	I0617 12:28:39.661271  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab34521c845c14735b9240393d84a4ea139edff737eb52bbcab5142cb4b14d9c"
	I0617 12:28:39.708135  891830 logs.go:123] Gathering logs for kube-proxy [01ed34dcf68b2a9b78b6f8fe23fa835976ee0d377021798f8d7cebec764710e6] ...
	I0617 12:28:39.708165  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01ed34dcf68b2a9b78b6f8fe23fa835976ee0d377021798f8d7cebec764710e6"
	I0617 12:28:39.752406  891830 logs.go:123] Gathering logs for kube-controller-manager [2f5e7836f49bbcde978b575f5a5e73c8896b6f68373d378f5ac21cdbb3ac9861] ...
	I0617 12:28:39.752435  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f5e7836f49bbcde978b575f5a5e73c8896b6f68373d378f5ac21cdbb3ac9861"
	I0617 12:28:39.821308  891830 logs.go:123] Gathering logs for etcd [62eecdd20028540e531a520f04ce791f3f091b0ef3cd8168b4353214efb573e6] ...
	I0617 12:28:39.821345  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62eecdd20028540e531a520f04ce791f3f091b0ef3cd8168b4353214efb573e6"
	I0617 12:28:39.877021  891830 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:28:39.877052  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:28:40.074349  891830 logs.go:123] Gathering logs for etcd [8111fbc70186e0bb535a00ed645a988d91bb571f841eb3cb159731db62b5c744] ...
	I0617 12:28:40.074384  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8111fbc70186e0bb535a00ed645a988d91bb571f841eb3cb159731db62b5c744"
	I0617 12:28:40.125172  891830 logs.go:123] Gathering logs for kube-proxy [9d7d9f16d42ab7f1ef0a1211251bcc155fa81e5e371261276594bcc82d37cf8a] ...
	I0617 12:28:40.125208  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d7d9f16d42ab7f1ef0a1211251bcc155fa81e5e371261276594bcc82d37cf8a"
	I0617 12:28:40.183513  891830 logs.go:123] Gathering logs for kube-controller-manager [6984630f73f8c733766dddc7c177b5938f6a2bc7caae1005e88bb8e518c44291] ...
	I0617 12:28:40.183545  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6984630f73f8c733766dddc7c177b5938f6a2bc7caae1005e88bb8e518c44291"
	I0617 12:28:40.256488  891830 logs.go:123] Gathering logs for kindnet [f42795d5fdc71f939390bd8229f9fd7d76b12052b8a542b58c5a620139d2f946] ...
	I0617 12:28:40.256524  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f42795d5fdc71f939390bd8229f9fd7d76b12052b8a542b58c5a620139d2f946"
	I0617 12:28:40.305458  891830 logs.go:123] Gathering logs for dmesg ...
	I0617 12:28:40.305483  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:28:40.326623  891830 out.go:304] Setting ErrFile to fd 2...
	I0617 12:28:40.326649  891830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0617 12:28:40.326805  891830 out.go:239] X Problems detected in kubelet:
	W0617 12:28:40.326826  891830 out.go:239]   Jun 17 12:24:50 no-preload-969284 kubelet[659]: W0617 12:24:50.765826     659 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-969284" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-969284' and this object
	W0617 12:28:40.326863  891830 out.go:239]   Jun 17 12:24:50 no-preload-969284 kubelet[659]: E0617 12:24:50.766036     659 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-969284" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-969284' and this object
	I0617 12:28:40.326878  891830 out.go:304] Setting ErrFile to fd 2...
	I0617 12:28:40.326898  891830 out.go:338] TERM=,COLORTERM=, which probably does not support color
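The block above is minikube's post-mortem log gathering: for each control-plane component it lists container IDs with `sudo crictl ps -a --quiet --name=<component>`, tails the last 400 lines of each container with `sudo crictl logs --tail 400 <id>`, and pulls the kubelet and containerd journals with `journalctl -u <unit> -n 400`, surfacing any kubelet reflector/RBAC errors as "Problems detected in kubelet". Below is a minimal shell sketch of the same diagnostics, assuming shell access to the node (e.g. via `minikube ssh`); the loop and the component list are illustrative, while the individual commands appear verbatim in the log above.

	# Sketch only: manually re-run the diagnostics gathered in the log above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard storage-provisioner; do
	  for id in $(sudo crictl ps -a --quiet --name="$name"); do
	    echo "=== $name ($id) ==="
	    sudo crictl logs --tail 400 "$id"      # last 400 lines per container, as in the log
	  done
	done
	sudo journalctl -u kubelet -n 400          # source of the "Found kubelet problem" lines
	sudo journalctl -u containerd -n 400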
	I0617 12:28:38.177641  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:40.178592  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:42.676558  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:45.179191  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:50.328252  891830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:28:50.349218  891830 api_server.go:72] duration metric: took 4m17.579258367s to wait for apiserver process to appear ...
	I0617 12:28:50.349240  891830 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:28:50.349278  891830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:28:50.349334  891830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:28:50.411146  891830 cri.go:89] found id: "88428f0be3d18b9b35186a3e2a952464f5105a8a8603ea7296799c5e49ff017e"
	I0617 12:28:50.411162  891830 cri.go:89] found id: "9fa6abcdac59d4ec0a779d25470caa47b7bbb178951b85491171856b54b192f3"
	I0617 12:28:50.411167  891830 cri.go:89] found id: ""
	I0617 12:28:50.411173  891830 logs.go:276] 2 containers: [88428f0be3d18b9b35186a3e2a952464f5105a8a8603ea7296799c5e49ff017e 9fa6abcdac59d4ec0a779d25470caa47b7bbb178951b85491171856b54b192f3]
	I0617 12:28:50.411255  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.417698  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.421877  891830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0617 12:28:50.421942  891830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:28:50.475933  891830 cri.go:89] found id: "8111fbc70186e0bb535a00ed645a988d91bb571f841eb3cb159731db62b5c744"
	I0617 12:28:50.475953  891830 cri.go:89] found id: "62eecdd20028540e531a520f04ce791f3f091b0ef3cd8168b4353214efb573e6"
	I0617 12:28:50.475958  891830 cri.go:89] found id: ""
	I0617 12:28:50.475965  891830 logs.go:276] 2 containers: [8111fbc70186e0bb535a00ed645a988d91bb571f841eb3cb159731db62b5c744 62eecdd20028540e531a520f04ce791f3f091b0ef3cd8168b4353214efb573e6]
	I0617 12:28:50.476020  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.482196  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.488086  891830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0617 12:28:50.488156  891830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:28:50.567773  891830 cri.go:89] found id: "42056191583d1f5754960ad15b097a576ab50f1c8782794d2926388983ae1fd0"
	I0617 12:28:50.567793  891830 cri.go:89] found id: "a03033990d5f4a4a8e56b832e073b41aa33e77fe1e7850ea5979019eeabcbe08"
	I0617 12:28:50.567798  891830 cri.go:89] found id: ""
	I0617 12:28:50.567808  891830 logs.go:276] 2 containers: [42056191583d1f5754960ad15b097a576ab50f1c8782794d2926388983ae1fd0 a03033990d5f4a4a8e56b832e073b41aa33e77fe1e7850ea5979019eeabcbe08]
	I0617 12:28:50.567879  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.574756  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.582176  891830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:28:50.582281  891830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:28:50.658907  891830 cri.go:89] found id: "ab34521c845c14735b9240393d84a4ea139edff737eb52bbcab5142cb4b14d9c"
	I0617 12:28:50.658936  891830 cri.go:89] found id: "c95280c809832f7685f42b6a8ea098a92fa8efb8d9b0a8c39fc09a23b3337500"
	I0617 12:28:50.658941  891830 cri.go:89] found id: ""
	I0617 12:28:50.658949  891830 logs.go:276] 2 containers: [ab34521c845c14735b9240393d84a4ea139edff737eb52bbcab5142cb4b14d9c c95280c809832f7685f42b6a8ea098a92fa8efb8d9b0a8c39fc09a23b3337500]
	I0617 12:28:50.659004  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.667217  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.672165  891830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:28:50.672238  891830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:28:50.725961  891830 cri.go:89] found id: "9d7d9f16d42ab7f1ef0a1211251bcc155fa81e5e371261276594bcc82d37cf8a"
	I0617 12:28:50.725990  891830 cri.go:89] found id: "01ed34dcf68b2a9b78b6f8fe23fa835976ee0d377021798f8d7cebec764710e6"
	I0617 12:28:50.726007  891830 cri.go:89] found id: ""
	I0617 12:28:50.726014  891830 logs.go:276] 2 containers: [9d7d9f16d42ab7f1ef0a1211251bcc155fa81e5e371261276594bcc82d37cf8a 01ed34dcf68b2a9b78b6f8fe23fa835976ee0d377021798f8d7cebec764710e6]
	I0617 12:28:50.726077  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.729883  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.733845  891830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:28:50.733934  891830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:28:50.809186  891830 cri.go:89] found id: "6984630f73f8c733766dddc7c177b5938f6a2bc7caae1005e88bb8e518c44291"
	I0617 12:28:50.809291  891830 cri.go:89] found id: "2f5e7836f49bbcde978b575f5a5e73c8896b6f68373d378f5ac21cdbb3ac9861"
	I0617 12:28:50.809300  891830 cri.go:89] found id: ""
	I0617 12:28:50.809308  891830 logs.go:276] 2 containers: [6984630f73f8c733766dddc7c177b5938f6a2bc7caae1005e88bb8e518c44291 2f5e7836f49bbcde978b575f5a5e73c8896b6f68373d378f5ac21cdbb3ac9861]
	I0617 12:28:50.809379  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.813691  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.820991  891830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0617 12:28:50.821058  891830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:28:47.683635  885555 pod_ready.go:102] pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace has status "Ready":"False"
	I0617 12:28:49.676757  885555 pod_ready.go:81] duration metric: took 4m0.006182587s for pod "metrics-server-9975d5f86-w7ck9" in "kube-system" namespace to be "Ready" ...
	E0617 12:28:49.676783  885555 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0617 12:28:49.676792  885555 pod_ready.go:38] duration metric: took 5m29.042731503s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0617 12:28:49.676807  885555 api_server.go:52] waiting for apiserver process to appear ...
	I0617 12:28:49.676850  885555 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0617 12:28:49.676918  885555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0617 12:28:49.719861  885555 cri.go:89] found id: "4c8b05cc53338f5377be14688172756dd02f7989ad55761a58b0f4c2aa83b6c5"
	I0617 12:28:49.719881  885555 cri.go:89] found id: "93248ad0d3571533f5f55b355330f125d3756357d5377ca4b5685d1541bf3ea8"
	I0617 12:28:49.719886  885555 cri.go:89] found id: ""
	I0617 12:28:49.719893  885555 logs.go:276] 2 containers: [4c8b05cc53338f5377be14688172756dd02f7989ad55761a58b0f4c2aa83b6c5 93248ad0d3571533f5f55b355330f125d3756357d5377ca4b5685d1541bf3ea8]
	I0617 12:28:49.719948  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.723530  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.727012  885555 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0617 12:28:49.727086  885555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0617 12:28:49.767901  885555 cri.go:89] found id: "5ae6ec941dc2fa5dddb29c886ab412ba333c98fb902b2443b1d6d06aa68dc55a"
	I0617 12:28:49.767923  885555 cri.go:89] found id: "d51ff90b464a07bbc7028e9d8452454acff9356047d6f3ccfd9837fad4344e0d"
	I0617 12:28:49.767929  885555 cri.go:89] found id: ""
	I0617 12:28:49.767936  885555 logs.go:276] 2 containers: [5ae6ec941dc2fa5dddb29c886ab412ba333c98fb902b2443b1d6d06aa68dc55a d51ff90b464a07bbc7028e9d8452454acff9356047d6f3ccfd9837fad4344e0d]
	I0617 12:28:49.768003  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.771901  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.775152  885555 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0617 12:28:49.775248  885555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0617 12:28:49.815564  885555 cri.go:89] found id: "c28f4777836661aecd3fac7733ba84292508f8e09940637ec697a14b608e8b21"
	I0617 12:28:49.815588  885555 cri.go:89] found id: "69a919aa5c14ae21855b7a11063b8873bc124cd369a2569e33734aa5edc4b63c"
	I0617 12:28:49.815593  885555 cri.go:89] found id: ""
	I0617 12:28:49.815601  885555 logs.go:276] 2 containers: [c28f4777836661aecd3fac7733ba84292508f8e09940637ec697a14b608e8b21 69a919aa5c14ae21855b7a11063b8873bc124cd369a2569e33734aa5edc4b63c]
	I0617 12:28:49.815665  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.819083  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.822151  885555 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0617 12:28:49.822254  885555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0617 12:28:49.859763  885555 cri.go:89] found id: "5ea33e0403f502a14bee51b367aa2ab21c3f1c2e8def56f7e25073809a96c24e"
	I0617 12:28:49.859835  885555 cri.go:89] found id: "33b3439c7183458694ca56c7f6e9260911d8c14a7c86c0469b8dc03f505deb0f"
	I0617 12:28:49.859856  885555 cri.go:89] found id: ""
	I0617 12:28:49.859872  885555 logs.go:276] 2 containers: [5ea33e0403f502a14bee51b367aa2ab21c3f1c2e8def56f7e25073809a96c24e 33b3439c7183458694ca56c7f6e9260911d8c14a7c86c0469b8dc03f505deb0f]
	I0617 12:28:49.859933  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.863286  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.866606  885555 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0617 12:28:49.866686  885555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0617 12:28:49.904648  885555 cri.go:89] found id: "c39d788e9ddc5e61548640b61f4e70b4d0b0f5c9ffc4fa37603cf44274abcb4b"
	I0617 12:28:49.904671  885555 cri.go:89] found id: "c04cf172cf8b22a2b419169621cc8909b6cc7170c752536e945bf6b4b01bcca0"
	I0617 12:28:49.904676  885555 cri.go:89] found id: ""
	I0617 12:28:49.904683  885555 logs.go:276] 2 containers: [c39d788e9ddc5e61548640b61f4e70b4d0b0f5c9ffc4fa37603cf44274abcb4b c04cf172cf8b22a2b419169621cc8909b6cc7170c752536e945bf6b4b01bcca0]
	I0617 12:28:49.904741  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.908394  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.911662  885555 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0617 12:28:49.911730  885555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0617 12:28:49.950580  885555 cri.go:89] found id: "5d81177822de55491501d88ceb395813919c8619721c616af1c12bcc910eab24"
	I0617 12:28:49.950601  885555 cri.go:89] found id: "eeba0dd857ff09e5969b3bcf37ad51875f7624dbde1d304a3dbbade1d64fbf06"
	I0617 12:28:49.950606  885555 cri.go:89] found id: ""
	I0617 12:28:49.950614  885555 logs.go:276] 2 containers: [5d81177822de55491501d88ceb395813919c8619721c616af1c12bcc910eab24 eeba0dd857ff09e5969b3bcf37ad51875f7624dbde1d304a3dbbade1d64fbf06]
	I0617 12:28:49.950672  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.954456  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:49.958400  885555 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0617 12:28:49.958514  885555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0617 12:28:50.011808  885555 cri.go:89] found id: "8677bbcb8def4ca7d9e6734637430dbad8bd9a8def059c11b56ed4db17c04599"
	I0617 12:28:50.011884  885555 cri.go:89] found id: "f315a7ec7452781d6412306a4aa3e993b47b29eaccf0203b0834fc2aeb329242"
	I0617 12:28:50.011923  885555 cri.go:89] found id: ""
	I0617 12:28:50.011959  885555 logs.go:276] 2 containers: [8677bbcb8def4ca7d9e6734637430dbad8bd9a8def059c11b56ed4db17c04599 f315a7ec7452781d6412306a4aa3e993b47b29eaccf0203b0834fc2aeb329242]
	I0617 12:28:50.012049  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.016362  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.020466  885555 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:28:50.020545  885555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:28:50.067228  885555 cri.go:89] found id: "362a01bad45f4e4081cbbf8ee77edf080e86ad9cd464e744534abcc40393504f"
	I0617 12:28:50.067251  885555 cri.go:89] found id: ""
	I0617 12:28:50.067263  885555 logs.go:276] 1 containers: [362a01bad45f4e4081cbbf8ee77edf080e86ad9cd464e744534abcc40393504f]
	I0617 12:28:50.067372  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.071279  885555 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:28:50.071383  885555 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:28:50.113796  885555 cri.go:89] found id: "06a68f0d2052e9b0ee97c609a7bc16438fa8b8ff0a43a0157e802993142b7e0d"
	I0617 12:28:50.113820  885555 cri.go:89] found id: "2ece5f594edf5341f8da2c11dc5de2fba23ee9c0c50f9b2a45fcb718aef5173f"
	I0617 12:28:50.113825  885555 cri.go:89] found id: ""
	I0617 12:28:50.113832  885555 logs.go:276] 2 containers: [06a68f0d2052e9b0ee97c609a7bc16438fa8b8ff0a43a0157e802993142b7e0d 2ece5f594edf5341f8da2c11dc5de2fba23ee9c0c50f9b2a45fcb718aef5173f]
	I0617 12:28:50.113892  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.117850  885555 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.121493  885555 logs.go:123] Gathering logs for coredns [69a919aa5c14ae21855b7a11063b8873bc124cd369a2569e33734aa5edc4b63c] ...
	I0617 12:28:50.121525  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69a919aa5c14ae21855b7a11063b8873bc124cd369a2569e33734aa5edc4b63c"
	I0617 12:28:50.161831  885555 logs.go:123] Gathering logs for kube-scheduler [5ea33e0403f502a14bee51b367aa2ab21c3f1c2e8def56f7e25073809a96c24e] ...
	I0617 12:28:50.161903  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ea33e0403f502a14bee51b367aa2ab21c3f1c2e8def56f7e25073809a96c24e"
	I0617 12:28:50.213147  885555 logs.go:123] Gathering logs for kube-scheduler [33b3439c7183458694ca56c7f6e9260911d8c14a7c86c0469b8dc03f505deb0f] ...
	I0617 12:28:50.213181  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33b3439c7183458694ca56c7f6e9260911d8c14a7c86c0469b8dc03f505deb0f"
	I0617 12:28:50.272783  885555 logs.go:123] Gathering logs for kindnet [f315a7ec7452781d6412306a4aa3e993b47b29eaccf0203b0834fc2aeb329242] ...
	I0617 12:28:50.272813  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f315a7ec7452781d6412306a4aa3e993b47b29eaccf0203b0834fc2aeb329242"
	I0617 12:28:50.316068  885555 logs.go:123] Gathering logs for kubelet ...
	I0617 12:28:50.316096  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0617 12:28:50.373058  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:20 old-k8s-version-440919 kubelet[665]: E0617 12:23:20.511830     665 reflector.go:138] object-"kube-system"/"kindnet-token-vw9vb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-vw9vb" is forbidden: User "system:node:old-k8s-version-440919" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-440919' and this object
	W0617 12:28:50.373986  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:20 old-k8s-version-440919 kubelet[665]: E0617 12:23:20.588626     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-440919" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-440919' and this object
	W0617 12:28:50.374296  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:20 old-k8s-version-440919 kubelet[665]: E0617 12:23:20.589644     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-440919" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-440919' and this object
	W0617 12:28:50.374545  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:20 old-k8s-version-440919 kubelet[665]: E0617 12:23:20.589736     665 reflector.go:138] object-"kube-system"/"coredns-token-ppjrc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-ppjrc" is forbidden: User "system:node:old-k8s-version-440919" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-440919' and this object
	W0617 12:28:50.374792  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:20 old-k8s-version-440919 kubelet[665]: E0617 12:23:20.589810     665 reflector.go:138] object-"kube-system"/"kube-proxy-token-m6zml": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-m6zml" is forbidden: User "system:node:old-k8s-version-440919" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-440919' and this object
	W0617 12:28:50.375039  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:20 old-k8s-version-440919 kubelet[665]: E0617 12:23:20.636545     665 reflector.go:138] object-"kube-system"/"metrics-server-token-jlld8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-jlld8" is forbidden: User "system:node:old-k8s-version-440919" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-440919' and this object
	W0617 12:28:50.375291  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:20 old-k8s-version-440919 kubelet[665]: E0617 12:23:20.636646     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-5zdhm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-5zdhm" is forbidden: User "system:node:old-k8s-version-440919" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-440919' and this object
	W0617 12:28:50.375542  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:20 old-k8s-version-440919 kubelet[665]: E0617 12:23:20.636709     665 reflector.go:138] object-"default"/"default-token-gzv8z": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-gzv8z" is forbidden: User "system:node:old-k8s-version-440919" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-440919' and this object
	W0617 12:28:50.388578  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:23 old-k8s-version-440919 kubelet[665]: E0617 12:23:23.464647     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0617 12:28:50.388812  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:23 old-k8s-version-440919 kubelet[665]: E0617 12:23:23.740524     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.392010  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:38 old-k8s-version-440919 kubelet[665]: E0617 12:23:38.439222     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0617 12:28:50.392888  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:40 old-k8s-version-440919 kubelet[665]: E0617 12:23:40.373162     665 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-84qpn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-84qpn" is forbidden: User "system:node:old-k8s-version-440919" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-440919' and this object
	W0617 12:28:50.395821  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:49 old-k8s-version-440919 kubelet[665]: E0617 12:23:49.414941     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.396422  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:54 old-k8s-version-440919 kubelet[665]: E0617 12:23:54.892167     665 pod_workers.go:191] Error syncing pod 1bf883c3-748b-4cdc-8387-8880e791d486 ("storage-provisioner_kube-system(1bf883c3-748b-4cdc-8387-8880e791d486)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1bf883c3-748b-4cdc-8387-8880e791d486)"
	W0617 12:28:50.396777  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:54 old-k8s-version-440919 kubelet[665]: E0617 12:23:54.896407     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.397288  885555 logs.go:138] Found kubelet problem: Jun 17 12:23:55 old-k8s-version-440919 kubelet[665]: E0617 12:23:55.902354     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.401230  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:01 old-k8s-version-440919 kubelet[665]: E0617 12:24:01.423695     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0617 12:28:50.401585  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:02 old-k8s-version-440919 kubelet[665]: E0617 12:24:02.748089     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.401912  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:12 old-k8s-version-440919 kubelet[665]: E0617 12:24:12.413178     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.402525  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:13 old-k8s-version-440919 kubelet[665]: E0617 12:24:13.949512     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.402860  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:22 old-k8s-version-440919 kubelet[665]: E0617 12:24:22.748069     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.403045  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:24 old-k8s-version-440919 kubelet[665]: E0617 12:24:24.416098     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.403230  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:36 old-k8s-version-440919 kubelet[665]: E0617 12:24:36.413401     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.404395  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:38 old-k8s-version-440919 kubelet[665]: E0617 12:24:38.002634     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.404759  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:42 old-k8s-version-440919 kubelet[665]: E0617 12:24:42.748531     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.407251  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:51 old-k8s-version-440919 kubelet[665]: E0617 12:24:51.421671     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0617 12:28:50.408889  885555 logs.go:138] Found kubelet problem: Jun 17 12:24:57 old-k8s-version-440919 kubelet[665]: E0617 12:24:57.413303     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.409119  885555 logs.go:138] Found kubelet problem: Jun 17 12:25:02 old-k8s-version-440919 kubelet[665]: E0617 12:25:02.413257     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.409473  885555 logs.go:138] Found kubelet problem: Jun 17 12:25:09 old-k8s-version-440919 kubelet[665]: E0617 12:25:09.412555     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.409664  885555 logs.go:138] Found kubelet problem: Jun 17 12:25:17 old-k8s-version-440919 kubelet[665]: E0617 12:25:17.412907     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.410806  885555 logs.go:138] Found kubelet problem: Jun 17 12:25:25 old-k8s-version-440919 kubelet[665]: E0617 12:25:25.159131     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.411399  885555 logs.go:138] Found kubelet problem: Jun 17 12:25:32 old-k8s-version-440919 kubelet[665]: E0617 12:25:32.416049     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.415883  885555 logs.go:138] Found kubelet problem: Jun 17 12:25:32 old-k8s-version-440919 kubelet[665]: E0617 12:25:32.748275     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.416257  885555 logs.go:138] Found kubelet problem: Jun 17 12:25:44 old-k8s-version-440919 kubelet[665]: E0617 12:25:44.413119     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.416477  885555 logs.go:138] Found kubelet problem: Jun 17 12:25:45 old-k8s-version-440919 kubelet[665]: E0617 12:25:45.413224     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.416837  885555 logs.go:138] Found kubelet problem: Jun 17 12:25:59 old-k8s-version-440919 kubelet[665]: E0617 12:25:59.412465     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.417043  885555 logs.go:138] Found kubelet problem: Jun 17 12:26:00 old-k8s-version-440919 kubelet[665]: E0617 12:26:00.413122     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.417393  885555 logs.go:138] Found kubelet problem: Jun 17 12:26:13 old-k8s-version-440919 kubelet[665]: E0617 12:26:13.412518     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.420247  885555 logs.go:138] Found kubelet problem: Jun 17 12:26:15 old-k8s-version-440919 kubelet[665]: E0617 12:26:15.420664     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0617 12:28:50.420664  885555 logs.go:138] Found kubelet problem: Jun 17 12:26:27 old-k8s-version-440919 kubelet[665]: E0617 12:26:27.412560     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.420859  885555 logs.go:138] Found kubelet problem: Jun 17 12:26:30 old-k8s-version-440919 kubelet[665]: E0617 12:26:30.415779     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.421191  885555 logs.go:138] Found kubelet problem: Jun 17 12:26:42 old-k8s-version-440919 kubelet[665]: E0617 12:26:42.416836     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.421435  885555 logs.go:138] Found kubelet problem: Jun 17 12:26:45 old-k8s-version-440919 kubelet[665]: E0617 12:26:45.413376     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.422192  885555 logs.go:138] Found kubelet problem: Jun 17 12:26:56 old-k8s-version-440919 kubelet[665]: E0617 12:26:56.379496     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.422515  885555 logs.go:138] Found kubelet problem: Jun 17 12:26:56 old-k8s-version-440919 kubelet[665]: E0617 12:26:56.413956     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.422873  885555 logs.go:138] Found kubelet problem: Jun 17 12:27:02 old-k8s-version-440919 kubelet[665]: E0617 12:27:02.748469     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.423104  885555 logs.go:138] Found kubelet problem: Jun 17 12:27:08 old-k8s-version-440919 kubelet[665]: E0617 12:27:08.418201     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.423464  885555 logs.go:138] Found kubelet problem: Jun 17 12:27:15 old-k8s-version-440919 kubelet[665]: E0617 12:27:15.412573     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.423653  885555 logs.go:138] Found kubelet problem: Jun 17 12:27:20 old-k8s-version-440919 kubelet[665]: E0617 12:27:20.413089     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.423984  885555 logs.go:138] Found kubelet problem: Jun 17 12:27:27 old-k8s-version-440919 kubelet[665]: E0617 12:27:27.412591     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.424174  885555 logs.go:138] Found kubelet problem: Jun 17 12:27:32 old-k8s-version-440919 kubelet[665]: E0617 12:27:32.413007     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.424504  885555 logs.go:138] Found kubelet problem: Jun 17 12:27:41 old-k8s-version-440919 kubelet[665]: E0617 12:27:41.413008     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.424691  885555 logs.go:138] Found kubelet problem: Jun 17 12:27:44 old-k8s-version-440919 kubelet[665]: E0617 12:27:44.412980     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.424877  885555 logs.go:138] Found kubelet problem: Jun 17 12:27:55 old-k8s-version-440919 kubelet[665]: E0617 12:27:55.412933     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.425209  885555 logs.go:138] Found kubelet problem: Jun 17 12:27:56 old-k8s-version-440919 kubelet[665]: E0617 12:27:56.415878     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.425539  885555 logs.go:138] Found kubelet problem: Jun 17 12:28:07 old-k8s-version-440919 kubelet[665]: E0617 12:28:07.412568     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.425773  885555 logs.go:138] Found kubelet problem: Jun 17 12:28:09 old-k8s-version-440919 kubelet[665]: E0617 12:28:09.413101     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.426124  885555 logs.go:138] Found kubelet problem: Jun 17 12:28:21 old-k8s-version-440919 kubelet[665]: E0617 12:28:21.412553     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.426317  885555 logs.go:138] Found kubelet problem: Jun 17 12:28:22 old-k8s-version-440919 kubelet[665]: E0617 12:28:22.412972     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.426650  885555 logs.go:138] Found kubelet problem: Jun 17 12:28:32 old-k8s-version-440919 kubelet[665]: E0617 12:28:32.412994     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.426839  885555 logs.go:138] Found kubelet problem: Jun 17 12:28:37 old-k8s-version-440919 kubelet[665]: E0617 12:28:37.413026     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:50.427171  885555 logs.go:138] Found kubelet problem: Jun 17 12:28:46 old-k8s-version-440919 kubelet[665]: E0617 12:28:46.412739     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:50.427359  885555 logs.go:138] Found kubelet problem: Jun 17 12:28:48 old-k8s-version-440919 kubelet[665]: E0617 12:28:48.413238     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0617 12:28:50.427371  885555 logs.go:123] Gathering logs for kube-apiserver [4c8b05cc53338f5377be14688172756dd02f7989ad55761a58b0f4c2aa83b6c5] ...
	I0617 12:28:50.427386  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c8b05cc53338f5377be14688172756dd02f7989ad55761a58b0f4c2aa83b6c5"
	I0617 12:28:50.518286  885555 logs.go:123] Gathering logs for coredns [c28f4777836661aecd3fac7733ba84292508f8e09940637ec697a14b608e8b21] ...
	I0617 12:28:50.518323  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c28f4777836661aecd3fac7733ba84292508f8e09940637ec697a14b608e8b21"
	I0617 12:28:50.569465  885555 logs.go:123] Gathering logs for storage-provisioner [06a68f0d2052e9b0ee97c609a7bc16438fa8b8ff0a43a0157e802993142b7e0d] ...
	I0617 12:28:50.569499  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06a68f0d2052e9b0ee97c609a7bc16438fa8b8ff0a43a0157e802993142b7e0d"
	I0617 12:28:50.616569  885555 logs.go:123] Gathering logs for kube-apiserver [93248ad0d3571533f5f55b355330f125d3756357d5377ca4b5685d1541bf3ea8] ...
	I0617 12:28:50.616598  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93248ad0d3571533f5f55b355330f125d3756357d5377ca4b5685d1541bf3ea8"
	I0617 12:28:50.717482  885555 logs.go:123] Gathering logs for kindnet [8677bbcb8def4ca7d9e6734637430dbad8bd9a8def059c11b56ed4db17c04599] ...
	I0617 12:28:50.718134  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8677bbcb8def4ca7d9e6734637430dbad8bd9a8def059c11b56ed4db17c04599"
	I0617 12:28:50.778810  885555 logs.go:123] Gathering logs for kubernetes-dashboard [362a01bad45f4e4081cbbf8ee77edf080e86ad9cd464e744534abcc40393504f] ...
	I0617 12:28:50.778909  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 362a01bad45f4e4081cbbf8ee77edf080e86ad9cd464e744534abcc40393504f"
	I0617 12:28:50.839775  885555 logs.go:123] Gathering logs for kube-proxy [c39d788e9ddc5e61548640b61f4e70b4d0b0f5c9ffc4fa37603cf44274abcb4b] ...
	I0617 12:28:50.839804  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c39d788e9ddc5e61548640b61f4e70b4d0b0f5c9ffc4fa37603cf44274abcb4b"
	I0617 12:28:50.942314  885555 logs.go:123] Gathering logs for kube-controller-manager [5d81177822de55491501d88ceb395813919c8619721c616af1c12bcc910eab24] ...
	I0617 12:28:50.942344  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d81177822de55491501d88ceb395813919c8619721c616af1c12bcc910eab24"
	I0617 12:28:51.055138  885555 logs.go:123] Gathering logs for dmesg ...
	I0617 12:28:51.055216  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:28:51.085812  885555 logs.go:123] Gathering logs for etcd [5ae6ec941dc2fa5dddb29c886ab412ba333c98fb902b2443b1d6d06aa68dc55a] ...
	I0617 12:28:51.085897  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ae6ec941dc2fa5dddb29c886ab412ba333c98fb902b2443b1d6d06aa68dc55a"
	I0617 12:28:51.162029  885555 logs.go:123] Gathering logs for etcd [d51ff90b464a07bbc7028e9d8452454acff9356047d6f3ccfd9837fad4344e0d] ...
	I0617 12:28:51.162062  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d51ff90b464a07bbc7028e9d8452454acff9356047d6f3ccfd9837fad4344e0d"
	I0617 12:28:51.238485  885555 logs.go:123] Gathering logs for storage-provisioner [2ece5f594edf5341f8da2c11dc5de2fba23ee9c0c50f9b2a45fcb718aef5173f] ...
	I0617 12:28:51.238519  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ece5f594edf5341f8da2c11dc5de2fba23ee9c0c50f9b2a45fcb718aef5173f"
	I0617 12:28:51.292421  885555 logs.go:123] Gathering logs for containerd ...
	I0617 12:28:51.292450  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0617 12:28:51.374150  885555 logs.go:123] Gathering logs for container status ...
	I0617 12:28:51.374186  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:28:51.457123  885555 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:28:51.457153  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:28:51.697650  885555 logs.go:123] Gathering logs for kube-proxy [c04cf172cf8b22a2b419169621cc8909b6cc7170c752536e945bf6b4b01bcca0] ...
	I0617 12:28:51.697684  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c04cf172cf8b22a2b419169621cc8909b6cc7170c752536e945bf6b4b01bcca0"
	I0617 12:28:51.750523  885555 logs.go:123] Gathering logs for kube-controller-manager [eeba0dd857ff09e5969b3bcf37ad51875f7624dbde1d304a3dbbade1d64fbf06] ...
	I0617 12:28:51.750557  885555 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eeba0dd857ff09e5969b3bcf37ad51875f7624dbde1d304a3dbbade1d64fbf06"
	I0617 12:28:51.859712  885555 out.go:304] Setting ErrFile to fd 2...
	I0617 12:28:51.859745  885555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0617 12:28:51.859810  885555 out.go:239] X Problems detected in kubelet:
	W0617 12:28:51.859828  885555 out.go:239]   Jun 17 12:28:22 old-k8s-version-440919 kubelet[665]: E0617 12:28:22.412972     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:51.859846  885555 out.go:239]   Jun 17 12:28:32 old-k8s-version-440919 kubelet[665]: E0617 12:28:32.412994     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:51.859855  885555 out.go:239]   Jun 17 12:28:37 old-k8s-version-440919 kubelet[665]: E0617 12:28:37.413026     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0617 12:28:51.859862  885555 out.go:239]   Jun 17 12:28:46 old-k8s-version-440919 kubelet[665]: E0617 12:28:46.412739     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	W0617 12:28:51.859872  885555 out.go:239]   Jun 17 12:28:48 old-k8s-version-440919 kubelet[665]: E0617 12:28:48.413238     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0617 12:28:51.859882  885555 out.go:304] Setting ErrFile to fd 2...
	I0617 12:28:51.859888  885555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 12:28:50.896239  891830 cri.go:89] found id: "30edefbfd01454454534693710d1b9bd45a0e7977d3dec0f9d75f188f1fa8a94"
	I0617 12:28:50.896313  891830 cri.go:89] found id: "f42795d5fdc71f939390bd8229f9fd7d76b12052b8a542b58c5a620139d2f946"
	I0617 12:28:50.896331  891830 cri.go:89] found id: ""
	I0617 12:28:50.896357  891830 logs.go:276] 2 containers: [30edefbfd01454454534693710d1b9bd45a0e7977d3dec0f9d75f188f1fa8a94 f42795d5fdc71f939390bd8229f9fd7d76b12052b8a542b58c5a620139d2f946]
	I0617 12:28:50.896451  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.903203  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:50.910588  891830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0617 12:28:50.910699  891830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0617 12:28:50.994816  891830 cri.go:89] found id: "57b20a09fe83db9b72fd5d65c95977aaf4ccbde1c4818369f82d56b0262ab901"
	I0617 12:28:50.994907  891830 cri.go:89] found id: ""
	I0617 12:28:50.994930  891830 logs.go:276] 1 containers: [57b20a09fe83db9b72fd5d65c95977aaf4ccbde1c4818369f82d56b0262ab901]
	I0617 12:28:50.995049  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:51.001991  891830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0617 12:28:51.002160  891830 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0617 12:28:51.069316  891830 cri.go:89] found id: "001b1f47a30f95ad76e1f203589b8d694c754fb019310df90cecccebeb2d3cdf"
	I0617 12:28:51.069339  891830 cri.go:89] found id: "1dd3c3af6ac6a25bab3ac499673e47d8d966a6d30e2812850ffa8ddfa7ed88a4"
	I0617 12:28:51.069345  891830 cri.go:89] found id: ""
	I0617 12:28:51.069352  891830 logs.go:276] 2 containers: [001b1f47a30f95ad76e1f203589b8d694c754fb019310df90cecccebeb2d3cdf 1dd3c3af6ac6a25bab3ac499673e47d8d966a6d30e2812850ffa8ddfa7ed88a4]
	I0617 12:28:51.069413  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:51.074338  891830 ssh_runner.go:195] Run: which crictl
	I0617 12:28:51.078772  891830 logs.go:123] Gathering logs for kubelet ...
	I0617 12:28:51.078796  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0617 12:28:51.137490  891830 logs.go:138] Found kubelet problem: Jun 17 12:24:50 no-preload-969284 kubelet[659]: W0617 12:24:50.765826     659 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-969284" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-969284' and this object
	W0617 12:28:51.137779  891830 logs.go:138] Found kubelet problem: Jun 17 12:24:50 no-preload-969284 kubelet[659]: E0617 12:24:50.766036     659 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-969284" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-969284' and this object
	I0617 12:28:51.178883  891830 logs.go:123] Gathering logs for kube-apiserver [9fa6abcdac59d4ec0a779d25470caa47b7bbb178951b85491171856b54b192f3] ...
	I0617 12:28:51.178983  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9fa6abcdac59d4ec0a779d25470caa47b7bbb178951b85491171856b54b192f3"
	I0617 12:28:51.258038  891830 logs.go:123] Gathering logs for etcd [8111fbc70186e0bb535a00ed645a988d91bb571f841eb3cb159731db62b5c744] ...
	I0617 12:28:51.258124  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8111fbc70186e0bb535a00ed645a988d91bb571f841eb3cb159731db62b5c744"
	I0617 12:28:51.335681  891830 logs.go:123] Gathering logs for coredns [42056191583d1f5754960ad15b097a576ab50f1c8782794d2926388983ae1fd0] ...
	I0617 12:28:51.335765  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42056191583d1f5754960ad15b097a576ab50f1c8782794d2926388983ae1fd0"
	I0617 12:28:51.421071  891830 logs.go:123] Gathering logs for kube-proxy [9d7d9f16d42ab7f1ef0a1211251bcc155fa81e5e371261276594bcc82d37cf8a] ...
	I0617 12:28:51.421195  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d7d9f16d42ab7f1ef0a1211251bcc155fa81e5e371261276594bcc82d37cf8a"
	I0617 12:28:51.500869  891830 logs.go:123] Gathering logs for kube-controller-manager [2f5e7836f49bbcde978b575f5a5e73c8896b6f68373d378f5ac21cdbb3ac9861] ...
	I0617 12:28:51.500958  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f5e7836f49bbcde978b575f5a5e73c8896b6f68373d378f5ac21cdbb3ac9861"
	I0617 12:28:51.586353  891830 logs.go:123] Gathering logs for container status ...
	I0617 12:28:51.586389  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0617 12:28:51.659848  891830 logs.go:123] Gathering logs for describe nodes ...
	I0617 12:28:51.659880  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0617 12:28:51.865832  891830 logs.go:123] Gathering logs for kube-apiserver [88428f0be3d18b9b35186a3e2a952464f5105a8a8603ea7296799c5e49ff017e] ...
	I0617 12:28:51.865859  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88428f0be3d18b9b35186a3e2a952464f5105a8a8603ea7296799c5e49ff017e"
	I0617 12:28:51.926706  891830 logs.go:123] Gathering logs for etcd [62eecdd20028540e531a520f04ce791f3f091b0ef3cd8168b4353214efb573e6] ...
	I0617 12:28:51.926742  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62eecdd20028540e531a520f04ce791f3f091b0ef3cd8168b4353214efb573e6"
	I0617 12:28:51.974501  891830 logs.go:123] Gathering logs for kube-scheduler [ab34521c845c14735b9240393d84a4ea139edff737eb52bbcab5142cb4b14d9c] ...
	I0617 12:28:51.974529  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab34521c845c14735b9240393d84a4ea139edff737eb52bbcab5142cb4b14d9c"
	I0617 12:28:52.022493  891830 logs.go:123] Gathering logs for kindnet [30edefbfd01454454534693710d1b9bd45a0e7977d3dec0f9d75f188f1fa8a94] ...
	I0617 12:28:52.022524  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30edefbfd01454454534693710d1b9bd45a0e7977d3dec0f9d75f188f1fa8a94"
	I0617 12:28:52.078033  891830 logs.go:123] Gathering logs for kindnet [f42795d5fdc71f939390bd8229f9fd7d76b12052b8a542b58c5a620139d2f946] ...
	I0617 12:28:52.078060  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f42795d5fdc71f939390bd8229f9fd7d76b12052b8a542b58c5a620139d2f946"
	I0617 12:28:52.116129  891830 logs.go:123] Gathering logs for kube-controller-manager [6984630f73f8c733766dddc7c177b5938f6a2bc7caae1005e88bb8e518c44291] ...
	I0617 12:28:52.116156  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6984630f73f8c733766dddc7c177b5938f6a2bc7caae1005e88bb8e518c44291"
	I0617 12:28:52.176802  891830 logs.go:123] Gathering logs for storage-provisioner [001b1f47a30f95ad76e1f203589b8d694c754fb019310df90cecccebeb2d3cdf] ...
	I0617 12:28:52.176841  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001b1f47a30f95ad76e1f203589b8d694c754fb019310df90cecccebeb2d3cdf"
	I0617 12:28:52.224116  891830 logs.go:123] Gathering logs for storage-provisioner [1dd3c3af6ac6a25bab3ac499673e47d8d966a6d30e2812850ffa8ddfa7ed88a4] ...
	I0617 12:28:52.224152  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1dd3c3af6ac6a25bab3ac499673e47d8d966a6d30e2812850ffa8ddfa7ed88a4"
	I0617 12:28:52.265939  891830 logs.go:123] Gathering logs for containerd ...
	I0617 12:28:52.265969  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0617 12:28:52.328472  891830 logs.go:123] Gathering logs for dmesg ...
	I0617 12:28:52.328510  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0617 12:28:52.348486  891830 logs.go:123] Gathering logs for coredns [a03033990d5f4a4a8e56b832e073b41aa33e77fe1e7850ea5979019eeabcbe08] ...
	I0617 12:28:52.348517  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a03033990d5f4a4a8e56b832e073b41aa33e77fe1e7850ea5979019eeabcbe08"
	I0617 12:28:52.386782  891830 logs.go:123] Gathering logs for kube-scheduler [c95280c809832f7685f42b6a8ea098a92fa8efb8d9b0a8c39fc09a23b3337500] ...
	I0617 12:28:52.386819  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c95280c809832f7685f42b6a8ea098a92fa8efb8d9b0a8c39fc09a23b3337500"
	I0617 12:28:52.445789  891830 logs.go:123] Gathering logs for kube-proxy [01ed34dcf68b2a9b78b6f8fe23fa835976ee0d377021798f8d7cebec764710e6] ...
	I0617 12:28:52.445823  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01ed34dcf68b2a9b78b6f8fe23fa835976ee0d377021798f8d7cebec764710e6"
	I0617 12:28:52.488479  891830 logs.go:123] Gathering logs for kubernetes-dashboard [57b20a09fe83db9b72fd5d65c95977aaf4ccbde1c4818369f82d56b0262ab901] ...
	I0617 12:28:52.488508  891830 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b20a09fe83db9b72fd5d65c95977aaf4ccbde1c4818369f82d56b0262ab901"
	I0617 12:28:52.534225  891830 out.go:304] Setting ErrFile to fd 2...
	I0617 12:28:52.534250  891830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0617 12:28:52.534327  891830 out.go:239] X Problems detected in kubelet:
	W0617 12:28:52.534343  891830 out.go:239]   Jun 17 12:24:50 no-preload-969284 kubelet[659]: W0617 12:24:50.765826     659 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-969284" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-969284' and this object
	W0617 12:28:52.534462  891830 out.go:239]   Jun 17 12:24:50 no-preload-969284 kubelet[659]: E0617 12:24:50.766036     659 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-969284" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-969284' and this object
	I0617 12:28:52.534480  891830 out.go:304] Setting ErrFile to fd 2...
	I0617 12:28:52.534487  891830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 12:29:01.861302  885555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:29:01.873993  885555 api_server.go:72] duration metric: took 6m1.227082782s to wait for apiserver process to appear ...
	I0617 12:29:01.874018  885555 api_server.go:88] waiting for apiserver healthz status ...
	I0617 12:29:01.876566  885555 out.go:177] 
	W0617 12:29:01.878398  885555 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0617 12:29:01.878433  885555 out.go:239] * 
	W0617 12:29:01.879721  885555 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0617 12:29:01.881717  885555 out.go:177] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	792bab234ef0a       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   863cf2dba2c1b       dashboard-metrics-scraper-8d5bb5db8-6m7v7
	06a68f0d2052e       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   b6a6c1ea99550       storage-provisioner
	362a01bad45f4       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   d51aaf28922d6       kubernetes-dashboard-cd95d586-jq8f7
	3152a9f89cd17       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   fc406ec7fd016       busybox
	c28f477783666       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   789d0e4fe4a8e       coredns-74ff55c5b-z5m4s
	c39d788e9ddc5       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   610c08d90c969       kube-proxy-6cbbp
	2ece5f594edf5       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   b6a6c1ea99550       storage-provisioner
	8677bbcb8def4       89d73d416b992       5 minutes ago       Running             kindnet-cni                 1                   7b20a441ff314       kindnet-bb5xg
	5ea33e0403f50       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   8ee41b49da855       kube-scheduler-old-k8s-version-440919
	5d81177822de5       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   88c3ab50d814b       kube-controller-manager-old-k8s-version-440919
	5ae6ec941dc2f       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   6e4f6f82b46dd       etcd-old-k8s-version-440919
	4c8b05cc53338       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   405a32ed0c2f8       kube-apiserver-old-k8s-version-440919
	fc28d9cfdd7d1       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   6aca116109fc1       busybox
	69a919aa5c14a       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   fb4a827073f3c       coredns-74ff55c5b-z5m4s
	f315a7ec74527       89d73d416b992       8 minutes ago       Exited              kindnet-cni                 0                   1e2fb8d785215       kindnet-bb5xg
	c04cf172cf8b2       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   8175cdceebd06       kube-proxy-6cbbp
	33b3439c71834       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   314461ec0b3b9       kube-scheduler-old-k8s-version-440919
	d51ff90b464a0       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   d002b4f966f63       etcd-old-k8s-version-440919
	eeba0dd857ff0       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   0a107e00727d3       kube-controller-manager-old-k8s-version-440919
	93248ad0d3571       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   96f8c9e24fbd7       kube-apiserver-old-k8s-version-440919
	
	
	==> containerd <==
	Jun 17 12:25:24 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:25:24.447896010Z" level=info msg="CreateContainer within sandbox \"863cf2dba2c1b1f4846fbe23d2e28168874f53f04b0a2c16204087989a875324\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,} returns container id \"952a0ad90405094b34a5923fd97f5ad73ada16ae42cba2d667a482d26dcfd7db\""
	Jun 17 12:25:24 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:25:24.448987974Z" level=info msg="StartContainer for \"952a0ad90405094b34a5923fd97f5ad73ada16ae42cba2d667a482d26dcfd7db\""
	Jun 17 12:25:24 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:25:24.552639066Z" level=info msg="StartContainer for \"952a0ad90405094b34a5923fd97f5ad73ada16ae42cba2d667a482d26dcfd7db\" returns successfully"
	Jun 17 12:25:24 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:25:24.577000020Z" level=info msg="shim disconnected" id=952a0ad90405094b34a5923fd97f5ad73ada16ae42cba2d667a482d26dcfd7db
	Jun 17 12:25:24 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:25:24.577265483Z" level=warning msg="cleaning up after shim disconnected" id=952a0ad90405094b34a5923fd97f5ad73ada16ae42cba2d667a482d26dcfd7db namespace=k8s.io
	Jun 17 12:25:24 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:25:24.577347377Z" level=info msg="cleaning up dead shim"
	Jun 17 12:25:24 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:25:24.587816444Z" level=warning msg="cleanup warnings time=\"2024-06-17T12:25:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3024 runtime=io.containerd.runc.v2\n"
	Jun 17 12:25:25 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:25:25.175519772Z" level=info msg="RemoveContainer for \"803c965d19c657d4fba37e4941f3b9ae1c5caff7ef7c6e18a88ae1d53094b452\""
	Jun 17 12:25:25 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:25:25.182114978Z" level=info msg="RemoveContainer for \"803c965d19c657d4fba37e4941f3b9ae1c5caff7ef7c6e18a88ae1d53094b452\" returns successfully"
	Jun 17 12:26:15 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:26:15.413343304Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 17 12:26:15 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:26:15.418443870Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Jun 17 12:26:15 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:26:15.420151550Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Jun 17 12:26:55 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:26:55.414351626Z" level=info msg="CreateContainer within sandbox \"863cf2dba2c1b1f4846fbe23d2e28168874f53f04b0a2c16204087989a875324\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,}"
	Jun 17 12:26:55 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:26:55.429442373Z" level=info msg="CreateContainer within sandbox \"863cf2dba2c1b1f4846fbe23d2e28168874f53f04b0a2c16204087989a875324\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,} returns container id \"792bab234ef0a545cd493e0b93be7bd0f953f73b81719225a348ca049e0fa58c\""
	Jun 17 12:26:55 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:26:55.430292324Z" level=info msg="StartContainer for \"792bab234ef0a545cd493e0b93be7bd0f953f73b81719225a348ca049e0fa58c\""
	Jun 17 12:26:55 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:26:55.495113194Z" level=info msg="StartContainer for \"792bab234ef0a545cd493e0b93be7bd0f953f73b81719225a348ca049e0fa58c\" returns successfully"
	Jun 17 12:26:55 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:26:55.524201421Z" level=info msg="shim disconnected" id=792bab234ef0a545cd493e0b93be7bd0f953f73b81719225a348ca049e0fa58c
	Jun 17 12:26:55 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:26:55.524261546Z" level=warning msg="cleaning up after shim disconnected" id=792bab234ef0a545cd493e0b93be7bd0f953f73b81719225a348ca049e0fa58c namespace=k8s.io
	Jun 17 12:26:55 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:26:55.524274633Z" level=info msg="cleaning up dead shim"
	Jun 17 12:26:55 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:26:55.537819232Z" level=warning msg="cleanup warnings time=\"2024-06-17T12:26:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3274 runtime=io.containerd.runc.v2\n"
	Jun 17 12:26:56 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:26:56.390209090Z" level=info msg="RemoveContainer for \"952a0ad90405094b34a5923fd97f5ad73ada16ae42cba2d667a482d26dcfd7db\""
	Jun 17 12:26:56 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:26:56.396189481Z" level=info msg="RemoveContainer for \"952a0ad90405094b34a5923fd97f5ad73ada16ae42cba2d667a482d26dcfd7db\" returns successfully"
	Jun 17 12:29:02 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:29:02.413600275Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 17 12:29:02 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:29:02.428843464Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Jun 17 12:29:02 old-k8s-version-440919 containerd[569]: time="2024-06-17T12:29:02.433743572Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	
	
	==> coredns [69a919aa5c14ae21855b7a11063b8873bc124cd369a2569e33734aa5edc4b63c] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:52591 - 41086 "HINFO IN 5056348176530310187.679006933509091382. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.034045438s
	
	
	==> coredns [c28f4777836661aecd3fac7733ba84292508f8e09940637ec697a14b608e8b21] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:47993 - 29431 "HINFO IN 1658307182434245020.4992625685084247683. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034061877s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-440919
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-440919
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6835d8632d8e28da57a827eb12d7b852b17a9f6
	                    minikube.k8s.io/name=old-k8s-version-440919
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_17T12_20_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jun 2024 12:20:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-440919
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jun 2024 12:29:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jun 2024 12:24:13 +0000   Mon, 17 Jun 2024 12:20:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jun 2024 12:24:13 +0000   Mon, 17 Jun 2024 12:20:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jun 2024 12:24:13 +0000   Mon, 17 Jun 2024 12:20:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jun 2024 12:24:13 +0000   Mon, 17 Jun 2024 12:20:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-440919
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022356Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022356Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ea19f8a2aea4f3e9aab9bc8cbdc3ac1
	  System UUID:                f53fc89d-9998-47dd-a663-02b591a53d22
	  Boot ID:                    10e5c427-da39-4514-92df-ee3f91ef093f
	  Kernel Version:             5.15.0-1063-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.33
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 coredns-74ff55c5b-z5m4s                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m8s
	  kube-system                 etcd-old-k8s-version-440919                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m14s
	  kube-system                 kindnet-bb5xg                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m8s
	  kube-system                 kube-apiserver-old-k8s-version-440919             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-controller-manager-old-k8s-version-440919    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-proxy-6cbbp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-scheduler-old-k8s-version-440919             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 metrics-server-9975d5f86-w7ck9                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m24s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m6s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-6m7v7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-jq8f7               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 8m34s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m34s (x4 over 8m34s)  kubelet     Node old-k8s-version-440919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m34s (x5 over 8m34s)  kubelet     Node old-k8s-version-440919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m34s (x4 over 8m34s)  kubelet     Node old-k8s-version-440919 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m34s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 8m15s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m14s                  kubelet     Node old-k8s-version-440919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m14s                  kubelet     Node old-k8s-version-440919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m14s                  kubelet     Node old-k8s-version-440919 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m14s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m8s                   kubelet     Node old-k8s-version-440919 status is now: NodeReady
	  Normal  Starting                 8m7s                   kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m55s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m55s (x8 over 5m55s)  kubelet     Node old-k8s-version-440919 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m55s (x8 over 5m55s)  kubelet     Node old-k8s-version-440919 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m55s (x7 over 5m55s)  kubelet     Node old-k8s-version-440919 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m55s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m39s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.001029] FS-Cache: O-key=[8] '0171ed0000000000'
	[  +0.000688] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000954] FS-Cache: N-cookie d=00000000f5be1c10{9p.inode} n=00000000c14890eb
	[  +0.001033] FS-Cache: N-key=[8] '0171ed0000000000'
	[  +0.002771] FS-Cache: Duplicate cookie detected
	[  +0.000717] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000948] FS-Cache: O-cookie d=00000000f5be1c10{9p.inode} n=00000000dfffb993
	[  +0.001025] FS-Cache: O-key=[8] '0171ed0000000000'
	[  +0.000686] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000929] FS-Cache: N-cookie d=00000000f5be1c10{9p.inode} n=000000004c5d8b26
	[  +0.001022] FS-Cache: N-key=[8] '0171ed0000000000'
	[  +2.760532] FS-Cache: Duplicate cookie detected
	[  +0.000774] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.000928] FS-Cache: O-cookie d=00000000f5be1c10{9p.inode} n=00000000e03cf592
	[  +0.001057] FS-Cache: O-key=[8] '0071ed0000000000'
	[  +0.000714] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000909] FS-Cache: N-cookie d=00000000f5be1c10{9p.inode} n=00000000cac8d902
	[  +0.001025] FS-Cache: N-key=[8] '0071ed0000000000'
	[  +0.336358] FS-Cache: Duplicate cookie detected
	[  +0.000698] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.000954] FS-Cache: O-cookie d=00000000f5be1c10{9p.inode} n=00000000f0da4743
	[  +0.001024] FS-Cache: O-key=[8] '0671ed0000000000'
	[  +0.000690] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000920] FS-Cache: N-cookie d=00000000f5be1c10{9p.inode} n=00000000c14890eb
	[  +0.001040] FS-Cache: N-key=[8] '0671ed0000000000'
	
	
	==> etcd [5ae6ec941dc2fa5dddb29c886ab412ba333c98fb902b2443b1d6d06aa68dc55a] <==
	2024-06-17 12:24:55.600650 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:25:05.600745 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:25:15.600560 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:25:25.600523 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:25:35.600586 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:25:45.600597 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:25:55.600466 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:26:05.600627 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:26:15.600825 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:26:25.600683 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:26:35.601061 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:26:45.600644 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:26:55.600622 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:27:05.600652 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:27:15.600524 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:27:25.600548 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:27:35.600664 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:27:45.600505 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:27:55.600607 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:28:05.600516 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:28:15.600576 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:28:25.600536 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:28:35.600618 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:28:45.600540 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:28:55.600549 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [d51ff90b464a07bbc7028e9d8452454acff9356047d6f3ccfd9837fad4344e0d] <==
	raft2024/06/17 12:20:31 INFO: 9f0758e1c58a86ed is starting a new election at term 1
	raft2024/06/17 12:20:31 INFO: 9f0758e1c58a86ed became candidate at term 2
	raft2024/06/17 12:20:31 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/06/17 12:20:31 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/06/17 12:20:31 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-06-17 12:20:31.345935 I | etcdserver: setting up the initial cluster version to 3.4
	2024-06-17 12:20:31.348518 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-06-17 12:20:31.348687 I | etcdserver/api: enabled capabilities for version 3.4
	2024-06-17 12:20:31.348783 I | etcdserver: published {Name:old-k8s-version-440919 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-06-17 12:20:31.349036 I | embed: ready to serve client requests
	2024-06-17 12:20:31.349145 I | embed: ready to serve client requests
	2024-06-17 12:20:31.350445 I | embed: serving client requests on 127.0.0.1:2379
	2024-06-17 12:20:31.353842 I | embed: serving client requests on 192.168.85.2:2379
	2024-06-17 12:20:40.015902 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:20:57.963780 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:20:59.860539 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:21:09.860653 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:21:19.860497 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:21:29.860508 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:21:39.860589 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:21:49.860748 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:21:59.860560 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:22:09.860662 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:22:19.860777 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-17 12:22:29.860687 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 12:29:04 up  4:11,  0 users,  load average: 1.55, 2.58, 2.95
	Linux old-k8s-version-440919 5.15.0-1063-aws #69~20.04.1-Ubuntu SMP Fri May 10 19:21:30 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [8677bbcb8def4ca7d9e6734637430dbad8bd9a8def059c11b56ed4db17c04599] <==
	I0617 12:26:54.404059       1 main.go:227] handling current node
	I0617 12:27:04.424598       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:27:04.424625       1 main.go:227] handling current node
	I0617 12:27:14.432923       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:27:14.432950       1 main.go:227] handling current node
	I0617 12:27:24.443595       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:27:24.443624       1 main.go:227] handling current node
	I0617 12:27:34.460749       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:27:34.460780       1 main.go:227] handling current node
	I0617 12:27:44.472337       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:27:44.472363       1 main.go:227] handling current node
	I0617 12:27:54.482626       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:27:54.482657       1 main.go:227] handling current node
	I0617 12:28:04.500513       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:28:04.500545       1 main.go:227] handling current node
	I0617 12:28:14.511713       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:28:14.511744       1 main.go:227] handling current node
	I0617 12:28:24.520046       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:28:24.520078       1 main.go:227] handling current node
	I0617 12:28:34.535607       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:28:34.535711       1 main.go:227] handling current node
	I0617 12:28:44.552125       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:28:44.552155       1 main.go:227] handling current node
	I0617 12:28:54.569449       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:28:54.569481       1 main.go:227] handling current node
	
	
	==> kindnet [f315a7ec7452781d6412306a4aa3e993b47b29eaccf0203b0834fc2aeb329242] <==
	I0617 12:20:58.530600       1 main.go:116] setting mtu 1500 for CNI 
	I0617 12:20:58.530619       1 main.go:146] kindnetd IP family: "ipv4"
	I0617 12:20:58.530630       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0617 12:20:58.834075       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:20:58.927735       1 main.go:227] handling current node
	I0617 12:21:08.948677       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:21:08.948704       1 main.go:227] handling current node
	I0617 12:21:18.964635       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:21:18.964667       1 main.go:227] handling current node
	I0617 12:21:28.975490       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:21:28.975518       1 main.go:227] handling current node
	I0617 12:21:38.982902       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:21:38.982929       1 main.go:227] handling current node
	I0617 12:21:49.000253       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:21:49.000282       1 main.go:227] handling current node
	I0617 12:21:59.006291       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:21:59.006320       1 main.go:227] handling current node
	I0617 12:22:09.022730       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:22:09.022765       1 main.go:227] handling current node
	I0617 12:22:19.037668       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:22:19.037697       1 main.go:227] handling current node
	I0617 12:22:29.042145       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:22:29.042173       1 main.go:227] handling current node
	I0617 12:22:39.059169       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0617 12:22:39.059200       1 main.go:227] handling current node
	
	
	==> kube-apiserver [4c8b05cc53338f5377be14688172756dd02f7989ad55761a58b0f4c2aa83b6c5] <==
	I0617 12:25:29.662504       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0617 12:25:29.662515       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0617 12:26:05.924653       1 client.go:360] parsed scheme: "passthrough"
	I0617 12:26:05.924702       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0617 12:26:05.924711       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0617 12:26:24.069591       1 handler_proxy.go:102] no RequestInfo found in the context
	E0617 12:26:24.069712       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0617 12:26:24.069761       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0617 12:26:41.264861       1 client.go:360] parsed scheme: "passthrough"
	I0617 12:26:41.264903       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0617 12:26:41.264911       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0617 12:27:13.932466       1 client.go:360] parsed scheme: "passthrough"
	I0617 12:27:13.932519       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0617 12:27:13.932550       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0617 12:27:58.756538       1 client.go:360] parsed scheme: "passthrough"
	I0617 12:27:58.756591       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0617 12:27:58.756599       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0617 12:28:21.599906       1 handler_proxy.go:102] no RequestInfo found in the context
	E0617 12:28:21.599985       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0617 12:28:21.599999       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0617 12:28:29.356541       1 client.go:360] parsed scheme: "passthrough"
	I0617 12:28:29.356588       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0617 12:28:29.356623       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [93248ad0d3571533f5f55b355330f125d3756357d5377ca4b5685d1541bf3ea8] <==
	I0617 12:20:38.219102       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0617 12:20:38.226794       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0617 12:20:38.231236       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0617 12:20:38.231267       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0617 12:20:38.709072       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0617 12:20:38.745693       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0617 12:20:38.869487       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0617 12:20:38.870785       1 controller.go:606] quota admission added evaluator for: endpoints
	I0617 12:20:38.874839       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0617 12:20:39.856322       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0617 12:20:40.506762       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0617 12:20:40.598026       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0617 12:20:49.029944       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0617 12:20:55.798812       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0617 12:20:55.830426       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0617 12:21:06.969371       1 client.go:360] parsed scheme: "passthrough"
	I0617 12:21:06.969430       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0617 12:21:06.969439       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0617 12:21:39.663834       1 client.go:360] parsed scheme: "passthrough"
	I0617 12:21:39.663876       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0617 12:21:39.663908       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0617 12:22:10.482767       1 client.go:360] parsed scheme: "passthrough"
	I0617 12:22:10.482811       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0617 12:22:10.482821       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0617 12:22:38.828025       1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [5d81177822de55491501d88ceb395813919c8619721c616af1c12bcc910eab24] <==
	E0617 12:24:41.373886       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0617 12:24:44.981124       1 request.go:655] Throttling request took 1.048412995s, request: GET:https://192.168.85.2:8443/apis/batch/v1?timeout=32s
	W0617 12:24:45.833152       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0617 12:25:11.900295       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0617 12:25:17.483706       1 request.go:655] Throttling request took 1.048557386s, request: GET:https://192.168.85.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0617 12:25:18.335209       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0617 12:25:42.402717       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0617 12:25:49.985864       1 request.go:655] Throttling request took 1.048577623s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0617 12:25:50.837215       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0617 12:26:12.904552       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0617 12:26:22.487741       1 request.go:655] Throttling request took 1.048373241s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0617 12:26:23.339186       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0617 12:26:43.406333       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0617 12:26:54.989794       1 request.go:655] Throttling request took 1.048399657s, request: GET:https://192.168.85.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
	W0617 12:26:55.841137       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0617 12:27:13.908286       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0617 12:27:27.491529       1 request.go:655] Throttling request took 1.048517385s, request: GET:https://192.168.85.2:8443/apis/certificates.k8s.io/v1?timeout=32s
	W0617 12:27:28.343106       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0617 12:27:44.419146       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0617 12:27:59.993769       1 request.go:655] Throttling request took 1.048445782s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0617 12:28:00.845208       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0617 12:28:14.920918       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0617 12:28:32.496679       1 request.go:655] Throttling request took 1.048400115s, request: GET:https://192.168.85.2:8443/apis/events.k8s.io/v1?timeout=32s
	W0617 12:28:33.348076       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0617 12:28:45.422889       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-controller-manager [eeba0dd857ff09e5969b3bcf37ad51875f7624dbde1d304a3dbbade1d64fbf06] <==
	W0617 12:20:55.961217       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-440919. Assuming now as a timestamp.
	I0617 12:20:55.961308       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0617 12:20:55.961629       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0617 12:20:55.962266       1 event.go:291] "Event occurred" object="old-k8s-version-440919" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-440919 event: Registered Node old-k8s-version-440919 in Controller"
	E0617 12:20:55.973028       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	E0617 12:20:55.974456       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"39ab1470-b00b-4271-b0a3-b7c4c14e905f", ResourceVersion:"275", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63854223640, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400043ce60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400043cee0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x400043cf20), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40018ffec0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400043c
f40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400043cf80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400043cfc0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400033d320), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000d40e28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000a681c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40007505c0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000d40e78)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0617 12:20:55.974589       1 shared_informer.go:247] Caches are synced for resource quota 
	I0617 12:20:56.017383       1 shared_informer.go:247] Caches are synced for expand 
	I0617 12:20:56.035631       1 shared_informer.go:247] Caches are synced for PV protection 
	I0617 12:20:56.037586       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-440919" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0617 12:20:56.082960       1 shared_informer.go:247] Caches are synced for attach detach 
	E0617 12:20:56.089958       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	E0617 12:20:56.146112       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"39ab1470-b00b-4271-b0a3-b7c4c14e905f", ResourceVersion:"398", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63854223640, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001d062c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001d062e0)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001d06300), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001d06320)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001d06340), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001cc3fc0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001d06360), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001d06380), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001d063c0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001bdbb00), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001d00818), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000a6bb90), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000384ce8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001d00868)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	I0617 12:20:56.200770       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0617 12:20:56.443855       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0617 12:20:56.443890       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0617 12:20:56.500997       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0617 12:20:56.809374       1 request.go:655] Throttling request took 1.044458823s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	I0617 12:20:57.513103       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0617 12:20:57.532820       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-2fm2r"
	I0617 12:20:57.611345       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0617 12:20:57.611806       1 shared_informer.go:247] Caches are synced for resource quota 
	I0617 12:21:00.961593       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0617 12:22:38.641750       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0617 12:22:38.712173       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [c04cf172cf8b22a2b419169621cc8909b6cc7170c752536e945bf6b4b01bcca0] <==
	I0617 12:20:56.751934       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0617 12:20:56.752083       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0617 12:20:56.787728       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0617 12:20:56.787837       1 server_others.go:185] Using iptables Proxier.
	I0617 12:20:56.788052       1 server.go:650] Version: v1.20.0
	I0617 12:20:56.788828       1 config.go:224] Starting endpoint slice config controller
	I0617 12:20:56.788841       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0617 12:20:56.788946       1 config.go:315] Starting service config controller
	I0617 12:20:56.788951       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0617 12:20:56.888973       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0617 12:20:56.889143       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [c39d788e9ddc5e61548640b61f4e70b4d0b0f5c9ffc4fa37603cf44274abcb4b] <==
	I0617 12:23:24.144221       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0617 12:23:24.144294       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0617 12:23:24.175215       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0617 12:23:24.176671       1 server_others.go:185] Using iptables Proxier.
	I0617 12:23:24.177390       1 server.go:650] Version: v1.20.0
	I0617 12:23:24.178142       1 config.go:315] Starting service config controller
	I0617 12:23:24.178298       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0617 12:23:24.178437       1 config.go:224] Starting endpoint slice config controller
	I0617 12:23:24.178513       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0617 12:23:24.278528       1 shared_informer.go:247] Caches are synced for service config 
	I0617 12:23:24.278747       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [33b3439c7183458694ca56c7f6e9260911d8c14a7c86c0469b8dc03f505deb0f] <==
	W0617 12:20:37.366665       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0617 12:20:37.366670       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0617 12:20:37.438458       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0617 12:20:37.439045       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0617 12:20:37.439070       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0617 12:20:37.439086       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0617 12:20:37.448781       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0617 12:20:37.449059       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0617 12:20:37.449261       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0617 12:20:37.450984       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0617 12:20:37.451271       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0617 12:20:37.451484       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0617 12:20:37.451709       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0617 12:20:37.451962       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0617 12:20:37.454134       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0617 12:20:37.454582       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0617 12:20:37.454821       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0617 12:20:37.455839       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0617 12:20:38.293092       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0617 12:20:38.308557       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0617 12:20:38.353086       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0617 12:20:38.376104       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0617 12:20:38.456458       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0617 12:20:38.537758       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0617 12:20:39.039233       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [5ea33e0403f502a14bee51b367aa2ab21c3f1c2e8def56f7e25073809a96c24e] <==
	I0617 12:23:16.592300       1 serving.go:331] Generated self-signed cert in-memory
	W0617 12:23:20.498850       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0617 12:23:20.499105       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0617 12:23:20.499213       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0617 12:23:20.499301       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0617 12:23:20.731078       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0617 12:23:20.742175       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0617 12:23:20.742200       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0617 12:23:20.742218       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0617 12:23:20.943009       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Jun 17 12:27:32 old-k8s-version-440919 kubelet[665]: E0617 12:27:32.413007     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 17 12:27:41 old-k8s-version-440919 kubelet[665]: I0617 12:27:41.412241     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 792bab234ef0a545cd493e0b93be7bd0f953f73b81719225a348ca049e0fa58c
	Jun 17 12:27:41 old-k8s-version-440919 kubelet[665]: E0617 12:27:41.413008     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	Jun 17 12:27:44 old-k8s-version-440919 kubelet[665]: E0617 12:27:44.412980     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 17 12:27:55 old-k8s-version-440919 kubelet[665]: E0617 12:27:55.412933     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 17 12:27:56 old-k8s-version-440919 kubelet[665]: I0617 12:27:56.412330     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 792bab234ef0a545cd493e0b93be7bd0f953f73b81719225a348ca049e0fa58c
	Jun 17 12:27:56 old-k8s-version-440919 kubelet[665]: E0617 12:27:56.415878     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	Jun 17 12:28:07 old-k8s-version-440919 kubelet[665]: I0617 12:28:07.412227     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 792bab234ef0a545cd493e0b93be7bd0f953f73b81719225a348ca049e0fa58c
	Jun 17 12:28:07 old-k8s-version-440919 kubelet[665]: E0617 12:28:07.412568     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	Jun 17 12:28:09 old-k8s-version-440919 kubelet[665]: E0617 12:28:09.413101     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 17 12:28:21 old-k8s-version-440919 kubelet[665]: I0617 12:28:21.412213     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 792bab234ef0a545cd493e0b93be7bd0f953f73b81719225a348ca049e0fa58c
	Jun 17 12:28:21 old-k8s-version-440919 kubelet[665]: E0617 12:28:21.412553     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	Jun 17 12:28:22 old-k8s-version-440919 kubelet[665]: E0617 12:28:22.412972     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 17 12:28:32 old-k8s-version-440919 kubelet[665]: I0617 12:28:32.412228     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 792bab234ef0a545cd493e0b93be7bd0f953f73b81719225a348ca049e0fa58c
	Jun 17 12:28:32 old-k8s-version-440919 kubelet[665]: E0617 12:28:32.412994     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	Jun 17 12:28:37 old-k8s-version-440919 kubelet[665]: E0617 12:28:37.413026     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 17 12:28:46 old-k8s-version-440919 kubelet[665]: I0617 12:28:46.412337     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 792bab234ef0a545cd493e0b93be7bd0f953f73b81719225a348ca049e0fa58c
	Jun 17 12:28:46 old-k8s-version-440919 kubelet[665]: E0617 12:28:46.412739     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	Jun 17 12:28:48 old-k8s-version-440919 kubelet[665]: E0617 12:28:48.413238     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 17 12:28:59 old-k8s-version-440919 kubelet[665]: I0617 12:28:59.412194     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 792bab234ef0a545cd493e0b93be7bd0f953f73b81719225a348ca049e0fa58c
	Jun 17 12:28:59 old-k8s-version-440919 kubelet[665]: E0617 12:28:59.412554     665 pod_workers.go:191] Error syncing pod 32bd42a3-275b-46d8-9681-6824dd924a9e ("dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6m7v7_kubernetes-dashboard(32bd42a3-275b-46d8-9681-6824dd924a9e)"
	Jun 17 12:29:02 old-k8s-version-440919 kubelet[665]: E0617 12:29:02.436451     665 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Jun 17 12:29:02 old-k8s-version-440919 kubelet[665]: E0617 12:29:02.437324     665 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Jun 17 12:29:02 old-k8s-version-440919 kubelet[665]: E0617 12:29:02.437670     665 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-jlld8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-w7ck9_kube-system(b1829b9
f-9401-4c9b-a032-45c1d46ad46f): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Jun 17 12:29:02 old-k8s-version-440919 kubelet[665]: E0617 12:29:02.438105     665 pod_workers.go:191] Error syncing pod b1829b9f-9401-4c9b-a032-45c1d46ad46f ("metrics-server-9975d5f86-w7ck9_kube-system(b1829b9f-9401-4c9b-a032-45c1d46ad46f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	
	
	==> kubernetes-dashboard [362a01bad45f4e4081cbbf8ee77edf080e86ad9cd464e744534abcc40393504f] <==
	2024/06/17 12:23:47 Using namespace: kubernetes-dashboard
	2024/06/17 12:23:47 Using in-cluster config to connect to apiserver
	2024/06/17 12:23:47 Using secret token for csrf signing
	2024/06/17 12:23:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/06/17 12:23:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/06/17 12:23:47 Successful initial request to the apiserver, version: v1.20.0
	2024/06/17 12:23:47 Generating JWE encryption key
	2024/06/17 12:23:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/06/17 12:23:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/06/17 12:23:47 Initializing JWE encryption key from synchronized object
	2024/06/17 12:23:47 Creating in-cluster Sidecar client
	2024/06/17 12:23:47 Serving insecurely on HTTP port: 9090
	2024/06/17 12:23:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/17 12:24:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/17 12:24:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/17 12:25:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/17 12:25:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/17 12:26:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/17 12:26:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/17 12:27:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/17 12:27:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/17 12:28:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/17 12:28:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/17 12:23:47 Starting overwatch
	
	
	==> storage-provisioner [06a68f0d2052e9b0ee97c609a7bc16438fa8b8ff0a43a0157e802993142b7e0d] <==
	I0617 12:24:08.566386       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0617 12:24:08.589685       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0617 12:24:08.589820       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0617 12:24:26.058201       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0617 12:24:26.063706       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-440919_6f025d28-f3f1-4288-8810-5228bef464e3!
	I0617 12:24:26.064428       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"61a95ec3-a1c4-4e18-aaf2-1d1aa9856714", APIVersion:"v1", ResourceVersion:"861", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-440919_6f025d28-f3f1-4288-8810-5228bef464e3 became leader
	I0617 12:24:26.164760       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-440919_6f025d28-f3f1-4288-8810-5228bef464e3!
	
	
	==> storage-provisioner [2ece5f594edf5341f8da2c11dc5de2fba23ee9c0c50f9b2a45fcb718aef5173f] <==
	I0617 12:23:23.900981       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0617 12:23:53.906480       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-440919 -n old-k8s-version-440919
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-440919 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-w7ck9
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-440919 describe pod metrics-server-9975d5f86-w7ck9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-440919 describe pod metrics-server-9975d5f86-w7ck9: exit status 1 (99.981899ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-w7ck9" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-440919 describe pod metrics-server-9975d5f86-w7ck9: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (373.63s)

                                                
                                    

Test pass (293/328)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 9.82
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.1/json-events 9.44
13 TestDownloadOnly/v1.30.1/preload-exists 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.07
18 TestDownloadOnly/v1.30.1/DeleteAll 0.2
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.54
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 154.11
29 TestAddons/parallel/Registry 15.65
31 TestAddons/parallel/InspektorGadget 10.82
32 TestAddons/parallel/MetricsServer 5.76
35 TestAddons/parallel/CSI 51.55
36 TestAddons/parallel/Headlamp 11.08
37 TestAddons/parallel/CloudSpanner 6.62
38 TestAddons/parallel/LocalPath 51.83
39 TestAddons/parallel/NvidiaDevicePlugin 5.6
40 TestAddons/parallel/Yakd 6.01
41 TestAddons/parallel/Volcano 162.66
44 TestAddons/serial/GCPAuth/Namespaces 0.16
45 TestAddons/StoppedEnableDisable 12.17
46 TestCertOptions 37.03
47 TestCertExpiration 230.35
49 TestForceSystemdFlag 38.8
50 TestForceSystemdEnv 42.22
51 TestDockerEnvContainerd 46.62
56 TestErrorSpam/setup 29.19
57 TestErrorSpam/start 0.69
58 TestErrorSpam/status 0.95
59 TestErrorSpam/pause 1.63
60 TestErrorSpam/unpause 1.74
61 TestErrorSpam/stop 1.39
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 61.06
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 5.9
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.12
73 TestFunctional/serial/CacheCmd/cache/add_local 1.41
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.08
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.13
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 46.08
82 TestFunctional/serial/ComponentHealth 0.11
83 TestFunctional/serial/LogsCmd 1.74
84 TestFunctional/serial/LogsFileCmd 1.86
85 TestFunctional/serial/InvalidService 4.42
87 TestFunctional/parallel/ConfigCmd 0.43
88 TestFunctional/parallel/DashboardCmd 8.56
89 TestFunctional/parallel/DryRun 0.48
90 TestFunctional/parallel/InternationalLanguage 0.2
91 TestFunctional/parallel/StatusCmd 1.27
95 TestFunctional/parallel/ServiceCmdConnect 8.64
96 TestFunctional/parallel/AddonsCmd 0.21
97 TestFunctional/parallel/PersistentVolumeClaim 30
99 TestFunctional/parallel/SSHCmd 0.68
100 TestFunctional/parallel/CpCmd 2.01
102 TestFunctional/parallel/FileSync 0.33
103 TestFunctional/parallel/CertSync 1.94
107 TestFunctional/parallel/NodeLabels 0.08
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
111 TestFunctional/parallel/License 0.31
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.57
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.31
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.21
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 8.24
124 TestFunctional/parallel/ServiceCmd/List 0.5
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
127 TestFunctional/parallel/ServiceCmd/Format 0.37
128 TestFunctional/parallel/ServiceCmd/URL 0.36
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
130 TestFunctional/parallel/ProfileCmd/profile_list 0.39
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
132 TestFunctional/parallel/MountCmd/any-port 7.14
133 TestFunctional/parallel/MountCmd/specific-port 2.45
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.34
135 TestFunctional/parallel/Version/short 0.09
136 TestFunctional/parallel/Version/components 1.27
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
141 TestFunctional/parallel/ImageCommands/ImageBuild 2.6
142 TestFunctional/parallel/ImageCommands/Setup 1.74
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.61
153 TestFunctional/delete_addon-resizer_images 0.08
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 127.14
160 TestMultiControlPlane/serial/DeployApp 20.04
161 TestMultiControlPlane/serial/PingHostFromPods 1.64
162 TestMultiControlPlane/serial/AddWorkerNode 22.5
163 TestMultiControlPlane/serial/NodeLabels 0.11
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.8
165 TestMultiControlPlane/serial/CopyFile 19.04
166 TestMultiControlPlane/serial/StopSecondaryNode 12.86
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
168 TestMultiControlPlane/serial/RestartSecondaryNode 18.41
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.83
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 140.1
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.89
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.5
173 TestMultiControlPlane/serial/StopCluster 36.01
174 TestMultiControlPlane/serial/RestartCluster 65.63
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.53
176 TestMultiControlPlane/serial/AddSecondaryNode 44.76
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.74
181 TestJSONOutput/start/Command 59.86
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.72
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.65
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.76
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.22
206 TestKicCustomNetwork/create_custom_network 42.82
207 TestKicCustomNetwork/use_default_bridge_network 35.5
208 TestKicExistingNetwork 35.86
209 TestKicCustomSubnet 33.56
210 TestKicStaticIP 38.81
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 69.25
215 TestMountStart/serial/StartWithMountFirst 6.23
216 TestMountStart/serial/VerifyMountFirst 0.26
217 TestMountStart/serial/StartWithMountSecond 6.41
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.65
220 TestMountStart/serial/VerifyMountPostDelete 0.24
221 TestMountStart/serial/Stop 1.19
222 TestMountStart/serial/RestartStopped 8.3
223 TestMountStart/serial/VerifyMountPostStop 0.28
226 TestMultiNode/serial/FreshStart2Nodes 71.76
227 TestMultiNode/serial/DeployApp2Nodes 4.57
228 TestMultiNode/serial/PingHostFrom2Pods 0.95
229 TestMultiNode/serial/AddNode 16.79
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.33
232 TestMultiNode/serial/CopyFile 9.77
233 TestMultiNode/serial/StopNode 2.24
234 TestMultiNode/serial/StartAfterStop 10.28
235 TestMultiNode/serial/RestartKeepsNodes 86.96
236 TestMultiNode/serial/DeleteNode 5.44
237 TestMultiNode/serial/StopMultiNode 23.99
238 TestMultiNode/serial/RestartMultiNode 51.73
239 TestMultiNode/serial/ValidateNameConflict 37.69
244 TestPreload 106.14
246 TestScheduledStopUnix 108.54
249 TestInsufficientStorage 10.5
250 TestRunningBinaryUpgrade 99.06
252 TestKubernetesUpgrade 358.56
253 TestMissingContainerUpgrade 164.17
255 TestPause/serial/Start 72.42
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
258 TestNoKubernetes/serial/StartWithK8s 36.62
259 TestNoKubernetes/serial/StartWithStopK8s 7.79
260 TestNoKubernetes/serial/Start 5.76
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
262 TestNoKubernetes/serial/ProfileList 1.13
263 TestNoKubernetes/serial/Stop 1.23
264 TestNoKubernetes/serial/StartNoArgs 6.86
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
266 TestPause/serial/SecondStartNoReconfiguration 7.81
267 TestPause/serial/Pause 0.94
268 TestPause/serial/VerifyStatus 0.48
269 TestPause/serial/Unpause 0.86
270 TestPause/serial/PauseAgain 1.12
271 TestPause/serial/DeletePaused 3.7
272 TestPause/serial/VerifyDeletedResources 0.15
273 TestStoppedBinaryUpgrade/Setup 1.45
274 TestStoppedBinaryUpgrade/Upgrade 118.68
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.28
290 TestNetworkPlugins/group/false 5.32
295 TestStartStop/group/old-k8s-version/serial/FirstStart 151.83
296 TestStartStop/group/old-k8s-version/serial/DeployApp 9.56
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.04
298 TestStartStop/group/old-k8s-version/serial/Stop 12.65
300 TestStartStop/group/no-preload/serial/FirstStart 74.89
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.47
303 TestStartStop/group/no-preload/serial/DeployApp 8.36
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.2
305 TestStartStop/group/no-preload/serial/Stop 12.08
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
307 TestStartStop/group/no-preload/serial/SecondStart 289.58
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
312 TestStartStop/group/old-k8s-version/serial/Pause 2.87
313 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.17
315 TestStartStop/group/embed-certs/serial/FirstStart 65.76
316 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
317 TestStartStop/group/no-preload/serial/Pause 4.18
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 63.45
320 TestStartStop/group/embed-certs/serial/DeployApp 8.38
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.33
322 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.43
323 TestStartStop/group/embed-certs/serial/Stop 12.13
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.07
325 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.07
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
327 TestStartStop/group/embed-certs/serial/SecondStart 278.13
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.35
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 269.23
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
335 TestStartStop/group/embed-certs/serial/Pause 3.81
336 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
337 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.5
339 TestStartStop/group/newest-cni/serial/FirstStart 55.65
340 TestNetworkPlugins/group/auto/Start 73.4
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.23
343 TestStartStop/group/newest-cni/serial/Stop 1.36
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
345 TestStartStop/group/newest-cni/serial/SecondStart 17.15
346 TestNetworkPlugins/group/auto/KubeletFlags 0.45
347 TestNetworkPlugins/group/auto/NetCatPod 12.45
348 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
351 TestStartStop/group/newest-cni/serial/Pause 3.46
352 TestNetworkPlugins/group/kindnet/Start 62.89
353 TestNetworkPlugins/group/auto/DNS 0.22
354 TestNetworkPlugins/group/auto/Localhost 0.15
355 TestNetworkPlugins/group/auto/HairPin 0.16
356 TestNetworkPlugins/group/calico/Start 67.81
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
359 TestNetworkPlugins/group/kindnet/NetCatPod 9.31
360 TestNetworkPlugins/group/kindnet/DNS 0.21
361 TestNetworkPlugins/group/kindnet/Localhost 0.21
362 TestNetworkPlugins/group/kindnet/HairPin 0.21
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/custom-flannel/Start 63.6
365 TestNetworkPlugins/group/calico/KubeletFlags 0.39
366 TestNetworkPlugins/group/calico/NetCatPod 9.39
367 TestNetworkPlugins/group/calico/DNS 0.25
368 TestNetworkPlugins/group/calico/Localhost 0.23
369 TestNetworkPlugins/group/calico/HairPin 0.21
370 TestNetworkPlugins/group/enable-default-cni/Start 44.75
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.3
373 TestNetworkPlugins/group/custom-flannel/DNS 0.19
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.48
378 TestNetworkPlugins/group/enable-default-cni/DNS 26.87
379 TestNetworkPlugins/group/flannel/Start 61.76
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.3
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.35
382 TestNetworkPlugins/group/bridge/Start 91.12
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.41
385 TestNetworkPlugins/group/flannel/NetCatPod 10.3
386 TestNetworkPlugins/group/flannel/DNS 0.27
387 TestNetworkPlugins/group/flannel/Localhost 0.17
388 TestNetworkPlugins/group/flannel/HairPin 0.2
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
390 TestNetworkPlugins/group/bridge/NetCatPod 9.27
391 TestNetworkPlugins/group/bridge/DNS 0.16
392 TestNetworkPlugins/group/bridge/Localhost 0.16
393 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (9.82s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-994764 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-994764 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.819966175s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.82s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-994764
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-994764: exit status 85 (68.868611ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-994764 | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC |          |
	|         | -p download-only-994764        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 11:35:33
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 11:35:33.711404  691247 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:35:33.711624  691247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:35:33.711651  691247 out.go:304] Setting ErrFile to fd 2...
	I0617 11:35:33.711673  691247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:35:33.711928  691247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-685849/.minikube/bin
	W0617 11:35:33.712093  691247 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19084-685849/.minikube/config/config.json: open /home/jenkins/minikube-integration/19084-685849/.minikube/config/config.json: no such file or directory
	I0617 11:35:33.712525  691247 out.go:298] Setting JSON to true
	I0617 11:35:33.713368  691247 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11881,"bootTime":1718612253,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0617 11:35:33.713493  691247 start.go:139] virtualization:  
	I0617 11:35:33.716064  691247 out.go:97] [download-only-994764] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0617 11:35:33.716208  691247 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/19084-685849/.minikube/cache/preloaded-tarball: no such file or directory
	I0617 11:35:33.717828  691247 out.go:169] MINIKUBE_LOCATION=19084
	I0617 11:35:33.716294  691247 notify.go:220] Checking for updates...
	I0617 11:35:33.721585  691247 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:35:33.723307  691247 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19084-685849/kubeconfig
	I0617 11:35:33.725223  691247 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-685849/.minikube
	I0617 11:35:33.726994  691247 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0617 11:35:33.730667  691247 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0617 11:35:33.730954  691247 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:35:33.751948  691247 docker.go:122] docker version: linux-26.1.4:Docker Engine - Community
	I0617 11:35:33.752059  691247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0617 11:35:33.814849  691247 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-06-17 11:35:33.80545857 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0617 11:35:33.814975  691247 docker.go:295] overlay module found
	I0617 11:35:33.816668  691247 out.go:97] Using the docker driver based on user configuration
	I0617 11:35:33.816693  691247 start.go:297] selected driver: docker
	I0617 11:35:33.816713  691247 start.go:901] validating driver "docker" against <nil>
	I0617 11:35:33.816852  691247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0617 11:35:33.873860  691247 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-06-17 11:35:33.865472641 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0617 11:35:33.874035  691247 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 11:35:33.874301  691247 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0617 11:35:33.874458  691247 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0617 11:35:33.876676  691247 out.go:169] Using Docker driver with root privileges
	I0617 11:35:33.878455  691247 cni.go:84] Creating CNI manager for ""
	I0617 11:35:33.878475  691247 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0617 11:35:33.878486  691247 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0617 11:35:33.878572  691247 start.go:340] cluster config:
	{Name:download-only-994764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-994764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:35:33.880223  691247 out.go:97] Starting "download-only-994764" primary control-plane node in "download-only-994764" cluster
	I0617 11:35:33.880244  691247 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0617 11:35:33.881681  691247 out.go:97] Pulling base image v0.0.44-1718296336-19068 ...
	I0617 11:35:33.881708  691247 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0617 11:35:33.881877  691247 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 in local docker daemon
	I0617 11:35:33.895240  691247 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 to local cache
	I0617 11:35:33.895418  691247 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 in local cache directory
	I0617 11:35:33.895536  691247 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 to local cache
	I0617 11:35:33.938728  691247 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0617 11:35:33.938756  691247 cache.go:56] Caching tarball of preloaded images
	I0617 11:35:33.939004  691247 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0617 11:35:33.941132  691247 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0617 11:35:33.941167  691247 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0617 11:35:34.052998  691247 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19084-685849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-994764 host does not exist
	  To start a cluster, run: "minikube start -p download-only-994764"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-994764
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.1/json-events (9.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-968605 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-968605 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.435459481s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (9.44s)

                                                
                                    
TestDownloadOnly/v1.30.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-968605
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-968605: exit status 85 (69.047398ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-994764 | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC |                     |
	|         | -p download-only-994764        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC | 17 Jun 24 11:35 UTC |
	| delete  | -p download-only-994764        | download-only-994764 | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC | 17 Jun 24 11:35 UTC |
	| start   | -o=json --download-only        | download-only-968605 | jenkins | v1.33.1 | 17 Jun 24 11:35 UTC |                     |
	|         | -p download-only-968605        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/17 11:35:43
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0617 11:35:43.925452  691411 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:35:43.925593  691411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:35:43.925604  691411 out.go:304] Setting ErrFile to fd 2...
	I0617 11:35:43.925609  691411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:35:43.925832  691411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-685849/.minikube/bin
	I0617 11:35:43.926217  691411 out.go:298] Setting JSON to true
	I0617 11:35:43.927081  691411 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11891,"bootTime":1718612253,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0617 11:35:43.927149  691411 start.go:139] virtualization:  
	I0617 11:35:43.929389  691411 out.go:97] [download-only-968605] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0617 11:35:43.931140  691411 out.go:169] MINIKUBE_LOCATION=19084
	I0617 11:35:43.929589  691411 notify.go:220] Checking for updates...
	I0617 11:35:43.933030  691411 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:35:43.934976  691411 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19084-685849/kubeconfig
	I0617 11:35:43.936890  691411 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-685849/.minikube
	I0617 11:35:43.938581  691411 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0617 11:35:43.941805  691411 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0617 11:35:43.942116  691411 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:35:43.963304  691411 docker.go:122] docker version: linux-26.1.4:Docker Engine - Community
	I0617 11:35:43.963435  691411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0617 11:35:44.025407  691411 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-06-17 11:35:44.014374747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0617 11:35:44.025519  691411 docker.go:295] overlay module found
	I0617 11:35:44.027756  691411 out.go:97] Using the docker driver based on user configuration
	I0617 11:35:44.027791  691411 start.go:297] selected driver: docker
	I0617 11:35:44.027814  691411 start.go:901] validating driver "docker" against <nil>
	I0617 11:35:44.027926  691411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0617 11:35:44.082973  691411 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-06-17 11:35:44.073929146 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0617 11:35:44.083153  691411 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0617 11:35:44.083532  691411 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0617 11:35:44.083760  691411 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0617 11:35:44.085590  691411 out.go:169] Using Docker driver with root privileges
	I0617 11:35:44.087589  691411 cni.go:84] Creating CNI manager for ""
	I0617 11:35:44.087617  691411 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0617 11:35:44.087639  691411 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0617 11:35:44.087721  691411 start.go:340] cluster config:
	{Name:download-only-968605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-968605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:35:44.089755  691411 out.go:97] Starting "download-only-968605" primary control-plane node in "download-only-968605" cluster
	I0617 11:35:44.089789  691411 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0617 11:35:44.091596  691411 out.go:97] Pulling base image v0.0.44-1718296336-19068 ...
	I0617 11:35:44.091627  691411 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime containerd
	I0617 11:35:44.091732  691411 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 in local docker daemon
	I0617 11:35:44.105435  691411 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 to local cache
	I0617 11:35:44.105565  691411 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 in local cache directory
	I0617 11:35:44.105584  691411 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 in local cache directory, skipping pull
	I0617 11:35:44.105589  691411 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 exists in cache, skipping pull
	I0617 11:35:44.105596  691411 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 as a tarball
	I0617 11:35:44.157925  691411 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-containerd-overlay2-arm64.tar.lz4
	I0617 11:35:44.157948  691411 cache.go:56] Caching tarball of preloaded images
	I0617 11:35:44.158114  691411 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime containerd
	I0617 11:35:44.160330  691411 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0617 11:35:44.160359  691411 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-containerd-overlay2-arm64.tar.lz4 ...
	I0617 11:35:44.265952  691411 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:793e35f7634e30f9002bcae3334a6957 -> /home/jenkins/minikube-integration/19084-685849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-968605 host does not exist
	  To start a cluster, run: "minikube start -p download-only-968605"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.30.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-968605
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.54s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-788244 --alsologtostderr --binary-mirror http://127.0.0.1:36423 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-788244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-788244
--- PASS: TestBinaryMirror (0.54s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-134601
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-134601: exit status 85 (76.252088ms)

                                                
                                                
-- stdout --
	* Profile "addons-134601" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-134601"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-134601
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-134601: exit status 85 (88.869528ms)

                                                
                                                
-- stdout --
	* Profile "addons-134601" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-134601"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/Setup (154.11s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-134601 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-134601 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m34.112303095s)
--- PASS: TestAddons/Setup (154.11s)
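For reference, the setup exercised above boils down to one addon-heavy start; a minimal sketch reusing the profile name and flags from this run (addon list trimmed, `addons list` added as a sanity check):

    # start a containerd cluster with the addon set exercised by this suite
    minikube start -p addons-134601 --wait=true --memory=4000 \
      --driver=docker --container-runtime=containerd \
      --addons=ingress --addons=ingress-dns --addons=registry --addons=metrics-server \
      --addons=volumesnapshots --addons=csi-hostpath-driver --addons=storage-provisioner-rancher
    # confirm the addons report as enabled
    minikube -p addons-134601 addons list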

                                                
                                    
TestAddons/parallel/Registry (15.65s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 44.878749ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-kb4t9" [7de14dcf-fed8-4a0e-80ba-1bb85acaa099] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003975064s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8q9kp" [2f6405e9-dc4d-4d13-8f69-a273afd74af7] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004902621s
addons_test.go:342: (dbg) Run:  kubectl --context addons-134601 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-134601 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-134601 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.602368802s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-134601 ip
2024/06/17 11:38:44 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-134601 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.65s)
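The registry check above can be repeated by hand; a rough sketch, assuming the registry addon is enabled on the same profile (the final curl mirrors the DEBUG GET against the node IP on port 5000):

    # probe the in-cluster registry Service from a throwaway busybox pod
    kubectl --context addons-134601 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # the registry proxy also answers on port 5000 of the node IP
    curl -s "http://$(minikube -p addons-134601 ip):5000/"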

                                                
                                    
TestAddons/parallel/InspektorGadget (10.82s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5dxsh" [f9b15878-c70f-4a1f-965e-07c833bdfb69] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007323524s
addons_test.go:843: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-134601
addons_test.go:843: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-134601: (5.80730932s)
--- PASS: TestAddons/parallel/InspektorGadget (10.82s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.76s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.575302ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-q8m7p" [ae6a5fc6-8011-4441-a047-37a05043782d] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00464582s
addons_test.go:417: (dbg) Run:  kubectl --context addons-134601 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-134601 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.76s)

                                                
                                    
TestAddons/parallel/CSI (51.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 7.991273ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-134601 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-134601 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8ce70214-4b6f-4169-b61f-22b6ebf8cdc6] Pending
helpers_test.go:344: "task-pv-pod" [8ce70214-4b6f-4169-b61f-22b6ebf8cdc6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8ce70214-4b6f-4169-b61f-22b6ebf8cdc6] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003923778s
addons_test.go:586: (dbg) Run:  kubectl --context addons-134601 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-134601 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-134601 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-134601 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-134601 delete pod task-pv-pod: (1.066563432s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-134601 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-134601 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-134601 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6f34943f-f09f-43e4-8776-dbc42c7cb2fd] Pending
helpers_test.go:344: "task-pv-pod-restore" [6f34943f-f09f-43e4-8776-dbc42c7cb2fd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6f34943f-f09f-43e4-8776-dbc42c7cb2fd] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003797256s
addons_test.go:628: (dbg) Run:  kubectl --context addons-134601 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Done: kubectl --context addons-134601 delete pod task-pv-pod-restore: (1.081686781s)
addons_test.go:632: (dbg) Run:  kubectl --context addons-134601 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-134601 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-arm64 -p addons-134601 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-arm64 -p addons-134601 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.728338913s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-134601 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (51.55s)
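The CSI flow above is claim → pod → snapshot → restore; a condensed sketch using the same testdata manifests, assuming they are present in the working tree:

    kubectl --context addons-134601 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-134601 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-134601 get pvc hpvc -o jsonpath={.status.phase}            # expect Bound
    kubectl --context addons-134601 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-134601 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
    kubectl --context addons-134601 delete pod task-pv-pod
    kubectl --context addons-134601 delete pvc hpvc
    kubectl --context addons-134601 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-134601 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml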

                                                
                                    
TestAddons/parallel/Headlamp (11.08s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-134601 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-134601 --alsologtostderr -v=1: (1.07068031s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7fc69f7444-7sq4b" [718779fa-e8e5-400f-afe2-604eb3aa50a1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7fc69f7444-7sq4b" [718779fa-e8e5-400f-afe2-604eb3aa50a1] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004302636s
--- PASS: TestAddons/parallel/Headlamp (11.08s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.62s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-cdw67" [37d3ce51-6e73-44c6-b7bf-65c2a9a101ac] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004606359s
addons_test.go:862: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-134601
--- PASS: TestAddons/parallel/CloudSpanner (6.62s)

                                                
                                    
TestAddons/parallel/LocalPath (51.83s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-134601 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-134601 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-134601 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [e49b905b-571f-4876-a429-2fd3bc31cb09] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [e49b905b-571f-4876-a429-2fd3bc31cb09] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [e49b905b-571f-4876-a429-2fd3bc31cb09] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003445044s
addons_test.go:992: (dbg) Run:  kubectl --context addons-134601 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-arm64 -p addons-134601 ssh "cat /opt/local-path-provisioner/pvc-cb684f52-f0cd-415f-a4e5-c14b80d7b47b_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-134601 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-134601 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-arm64 -p addons-134601 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-arm64 -p addons-134601 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.38889969s)
--- PASS: TestAddons/parallel/LocalPath (51.83s)
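The local-path check above writes through a PVC and then reads the file back from the node; a rough reproduction, noting that the pvc-... directory name is generated per claim and differs between runs:

    kubectl --context addons-134601 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-134601 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # once the pod completes, the written file lands on the node under /opt/local-path-provisioner
    minikube -p addons-134601 ssh "ls /opt/local-path-provisioner/"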

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.6s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-q5vq2" [670a7d96-6767-4d7a-b66e-430d9fd9ea84] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00537482s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-134601
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.60s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-9bcsv" [c6d9923b-b618-44f2-8ae2-eabc3f84ff91] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004246675s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/parallel/Volcano (162.66s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:905: volcano-controller stabilized in 8.059719ms
addons_test.go:889: volcano-scheduler stabilized in 9.04697ms
addons_test.go:897: volcano-admission stabilized in 9.338893ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-765f888978-hl85n" [57dd3db7-3df7-455b-9837-9ad6823aed09] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.003526911s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-7b497cf95b-js5b2" [e52ccab5-bfde-4ef9-89b5-aa94933abfac] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 6.003061923s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controller-86c5446455-vgvh5" [cb162083-a367-4fa2-a117-c5677b9b0885] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.004089696s
addons_test.go:924: (dbg) Run:  kubectl --context addons-134601 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-134601 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-134601 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [f0aa64cd-32f9-4629-a1ff-f99f9c3b5085] Pending
helpers_test.go:344: "test-job-nginx-0" [f0aa64cd-32f9-4629-a1ff-f99f9c3b5085] Pending: PodScheduled:Unschedulable (all nodes are unavailable: 1 node(s) resource fit failed.)
helpers_test.go:344: "test-job-nginx-0" [f0aa64cd-32f9-4629-a1ff-f99f9c3b5085] Pending: PodScheduled:Schedulable (Pod my-volcano/test-job-nginx-0 can possibly be assigned to addons-134601 once resource is released)
helpers_test.go:344: "test-job-nginx-0" [f0aa64cd-32f9-4629-a1ff-f99f9c3b5085] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [f0aa64cd-32f9-4629-a1ff-f99f9c3b5085] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 2m16.003508733s
addons_test.go:960: (dbg) Run:  out/minikube-linux-arm64 -p addons-134601 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-linux-arm64 -p addons-134601 addons disable volcano --alsologtostderr -v=1: (9.818716061s)
--- PASS: TestAddons/parallel/Volcano (162.66s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-134601 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-134601 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.17s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-134601
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-134601: (11.918336014s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-134601
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-134601
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-134601
--- PASS: TestAddons/StoppedEnableDisable (12.17s)

                                                
                                    
TestCertOptions (37.03s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-440034 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-440034 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.438596209s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-440034 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-440034 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-440034 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-440034" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-440034
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-440034: (1.957047608s)
--- PASS: TestCertOptions (37.03s)
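The SAN and port assertions above can be checked manually; a sketch using the same start flags (the grep targets are added here for readability and are not part of the test):

    minikube start -p cert-options-440034 --memory=2048 --driver=docker --container-runtime=containerd \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555
    # the extra IPs/names should appear in the served certificate's SANs
    minikube -p cert-options-440034 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    # and the in-node kubeconfig should point at the non-default port
    minikube ssh -p cert-options-440034 -- "sudo cat /etc/kubernetes/admin.conf" | grep server: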

                                                
                                    
TestCertExpiration (230.35s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-590735 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-590735 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (40.654912041s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-590735 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-590735 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.447989801s)
helpers_test.go:175: Cleaning up "cert-expiration-590735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-590735
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-590735: (2.246196384s)
--- PASS: TestCertExpiration (230.35s)
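The expiration test starts a cluster with a 3-minute certificate lifetime and restarts it with a longer one after the certs lapse (hence the ~3m30s gap between the two starts above); a minimal sketch with the same flags:

    minikube start -p cert-expiration-590735 --memory=2048 --cert-expiration=3m \
      --driver=docker --container-runtime=containerd
    # ...wait for the 3m certificates to lapse, then restart with a one-year lifetime
    minikube start -p cert-expiration-590735 --memory=2048 --cert-expiration=8760h \
      --driver=docker --container-runtime=containerd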

                                                
                                    
TestForceSystemdFlag (38.8s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-681619 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-681619 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.329046661s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-681619 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-681619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-681619
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-681619: (2.104865342s)
--- PASS: TestForceSystemdFlag (38.80s)

                                                
                                    
TestForceSystemdEnv (42.22s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-835812 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-835812 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.823184003s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-835812 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-835812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-835812
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-835812: (2.046405152s)
--- PASS: TestForceSystemdEnv (42.22s)
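Both force-systemd variants above end by reading /etc/containerd/config.toml from the node; a rough manual check of the flag-based variant (the SystemdCgroup grep is an assumption about what the test asserts, it is not shown in the log):

    minikube start -p force-systemd-flag-681619 --memory=2048 --force-systemd \
      --driver=docker --container-runtime=containerd
    # the containerd config inside the node should select the systemd cgroup driver
    minikube -p force-systemd-flag-681619 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup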

                                                
                                    
TestDockerEnvContainerd (46.62s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-038331 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-038331 --driver=docker  --container-runtime=containerd: (30.515151335s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-038331"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-vM2lFxztvLLh/agent.709636" SSH_AGENT_PID="709637" DOCKER_HOST=ssh://docker@127.0.0.1:33542 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-vM2lFxztvLLh/agent.709636" SSH_AGENT_PID="709637" DOCKER_HOST=ssh://docker@127.0.0.1:33542 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-vM2lFxztvLLh/agent.709636" SSH_AGENT_PID="709637" DOCKER_HOST=ssh://docker@127.0.0.1:33542 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.510905374s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-vM2lFxztvLLh/agent.709636" SSH_AGENT_PID="709637" DOCKER_HOST=ssh://docker@127.0.0.1:33542 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-038331" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-038331
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-038331: (2.138006148s)
--- PASS: TestDockerEnvContainerd (46.62s)
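The docker-env flow above lets a host docker CLI talk to the containerd-backed node over SSH; the test exports the variables explicitly, but the usual interactive equivalent is to eval the command's output. A sketch under that assumption, with the build and listing steps taken from the log:

    # export DOCKER_HOST and load the node's SSH key into the agent, then drive docker over SSH
    eval "$(minikube -p dockerenv-038331 docker-env --ssh-host --ssh-add)"
    docker version
    # classic builder, BuildKit disabled as in the test
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls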

                                                
                                    
TestErrorSpam/setup (29.19s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-696962 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-696962 --driver=docker  --container-runtime=containerd
E0617 11:43:29.264122  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
E0617 11:43:29.270152  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
E0617 11:43:29.280374  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
E0617 11:43:29.301050  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
E0617 11:43:29.341274  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
E0617 11:43:29.421559  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
E0617 11:43:29.581923  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
E0617 11:43:29.902853  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-696962 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-696962 --driver=docker  --container-runtime=containerd: (29.191798092s)
--- PASS: TestErrorSpam/setup (29.19s)

                                                
                                    
TestErrorSpam/start (0.69s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-696962 --log_dir /tmp/nospam-696962 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-696962 --log_dir /tmp/nospam-696962 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-696962 --log_dir /tmp/nospam-696962 start --dry-run
E0617 11:43:30.543395  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
--- PASS: TestErrorSpam/start (0.69s)

                                                
                                    
TestErrorSpam/status (0.95s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-696962 --log_dir /tmp/nospam-696962 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-696962 --log_dir /tmp/nospam-696962 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-696962 --log_dir /tmp/nospam-696962 status
--- PASS: TestErrorSpam/status (0.95s)

                                                
                                    
TestErrorSpam/pause (1.63s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-696962 --log_dir /tmp/nospam-696962 pause
E0617 11:43:31.823634  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-696962 --log_dir /tmp/nospam-696962 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-696962 --log_dir /tmp/nospam-696962 pause
--- PASS: TestErrorSpam/pause (1.63s)

                                                
                                    
TestErrorSpam/unpause (1.74s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-696962 --log_dir /tmp/nospam-696962 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-696962 --log_dir /tmp/nospam-696962 unpause
E0617 11:43:34.384394  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-696962 --log_dir /tmp/nospam-696962 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

                                                
                                    
TestErrorSpam/stop (1.39s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-696962 --log_dir /tmp/nospam-696962 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-696962 --log_dir /tmp/nospam-696962 stop: (1.203015745s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-696962 --log_dir /tmp/nospam-696962 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-696962 --log_dir /tmp/nospam-696962 stop
--- PASS: TestErrorSpam/stop (1.39s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19084-685849/.minikube/files/etc/test/nested/copy/691242/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (61.06s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-479738 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0617 11:43:49.745773  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
E0617 11:44:10.226476  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-479738 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m1.055521156s)
--- PASS: TestFunctional/serial/StartWithProxy (61.06s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.9s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-479738 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-479738 --alsologtostderr -v=8: (5.890544346s)
functional_test.go:659: soft start took 5.892141172s for "functional-479738" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.90s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-479738 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-479738 cache add registry.k8s.io/pause:3.1: (1.463465675s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-479738 cache add registry.k8s.io/pause:3.3: (1.405713994s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 cache add registry.k8s.io/pause:latest
E0617 11:44:51.187296  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-479738 cache add registry.k8s.io/pause:latest: (1.253446535s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-479738 /tmp/TestFunctionalserialCacheCmdcacheadd_local1021088354/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 cache add minikube-local-cache-test:functional-479738
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 cache delete minikube-local-cache-test:functional-479738
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-479738
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-479738 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (280.989347ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-479738 cache reload: (1.177760888s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.08s)
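The cache_reload steps above round-trip an image through minikube's local cache; a short sketch of the same cycle:

    minikube -p functional-479738 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-479738 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
    minikube -p functional-479738 cache reload
    minikube -p functional-479738 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again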

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 kubectl -- --context functional-479738 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-479738 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (46.08s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-479738 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-479738 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.072082661s)
functional_test.go:757: restart took 46.072206432s for "functional-479738" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (46.08s)
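ExtraConfig above restarts the running profile with an additional apiserver flag; a minimal sketch of the same restart (the trailing kubectl check mirrors the ComponentHealth step that follows):

    minikube start -p functional-479738 --wait=all \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision
    # control-plane pods should come back Ready after the restart
    kubectl --context functional-479738 get po -l tier=control-plane -n kube-system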

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-479738 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.74s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-479738 logs: (1.737613964s)
--- PASS: TestFunctional/serial/LogsCmd (1.74s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.86s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 logs --file /tmp/TestFunctionalserialLogsFileCmd1880390130/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-479738 logs --file /tmp/TestFunctionalserialLogsFileCmd1880390130/001/logs.txt: (1.86345343s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.86s)

                                                
                                    
TestFunctional/serial/InvalidService (4.42s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-479738 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-479738
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-479738: exit status 115 (371.908738ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32657 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-479738 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.42s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-479738 config get cpus: exit status 14 (75.223498ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-479738 config get cpus: exit status 14 (83.017776ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
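The config test above exercises set/get/unset; after an unset, a subsequent get exits with status 14, which is exactly what the non-zero exits above show. A brief sketch:

    minikube -p functional-479738 config set cpus 2
    minikube -p functional-479738 config get cpus     # prints 2
    minikube -p functional-479738 config unset cpus
    minikube -p functional-479738 config get cpus     # exits 14: key not found in config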

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-479738 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-479738 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 723746: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.56s)

                                                
                                    
TestFunctional/parallel/DryRun (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-479738 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-479738 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (223.753375ms)

                                                
                                                
-- stdout --
	* [functional-479738] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19084
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19084-685849/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-685849/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:46:22.842289  723405 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:46:22.842514  723405 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:46:22.842541  723405 out.go:304] Setting ErrFile to fd 2...
	I0617 11:46:22.842562  723405 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:46:22.842840  723405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-685849/.minikube/bin
	I0617 11:46:22.843227  723405 out.go:298] Setting JSON to false
	I0617 11:46:22.844277  723405 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12530,"bootTime":1718612253,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0617 11:46:22.844372  723405 start.go:139] virtualization:  
	I0617 11:46:22.846423  723405 out.go:177] * [functional-479738] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0617 11:46:22.847987  723405 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:46:22.849471  723405 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:46:22.848118  723405 notify.go:220] Checking for updates...
	I0617 11:46:22.851140  723405 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-685849/kubeconfig
	I0617 11:46:22.852661  723405 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-685849/.minikube
	I0617 11:46:22.854351  723405 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0617 11:46:22.856027  723405 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:46:22.858291  723405 config.go:182] Loaded profile config "functional-479738": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0617 11:46:22.858903  723405 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:46:22.890094  723405 docker.go:122] docker version: linux-26.1.4:Docker Engine - Community
	I0617 11:46:22.890245  723405 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0617 11:46:22.997620  723405 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-06-17 11:46:22.981871098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0617 11:46:22.997771  723405 docker.go:295] overlay module found
	I0617 11:46:23.001443  723405 out.go:177] * Using the docker driver based on existing profile
	I0617 11:46:23.004753  723405 start.go:297] selected driver: docker
	I0617 11:46:23.004792  723405 start.go:901] validating driver "docker" against &{Name:functional-479738 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-479738 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:46:23.004928  723405 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:46:23.007573  723405 out.go:177] 
	W0617 11:46:23.009327  723405 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0617 11:46:23.010964  723405 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-479738 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.48s)
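The exit status 23 above comes from minikube's pre-flight memory validation: a --dry-run start that requests 250MB is rejected because it falls below the 1800MB usable minimum, and the reason code RSRC_INSUFFICIENT_REQ_MEMORY is printed to stderr. A hedged Go sketch that checks both the exit code and the reason code, again assuming the binary and profile names from this run:

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		bin := "out/minikube-linux-arm64" // assumed binary path
		profile := "functional-479738"    // assumed profile name

		cmd := exec.Command(bin, "start", "-p", profile, "--dry-run",
			"--memory", "250MB", "--driver=docker", "--container-runtime=containerd")
		var stderr bytes.Buffer
		cmd.Stderr = &stderr
		err := cmd.Run()

		code := 0
		if exitErr, ok := err.(*exec.ExitError); ok {
			code = exitErr.ExitCode()
		}
		// This run exited 23 and printed the RSRC_INSUFFICIENT_REQ_MEMORY reason code.
		fmt.Println("exit code:", code)
		fmt.Println("memory rejected:", strings.Contains(stderr.String(), "RSRC_INSUFFICIENT_REQ_MEMORY"))
	}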

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-479738 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-479738 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (199.026961ms)

                                                
                                                
-- stdout --
	* [functional-479738] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19084
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19084-685849/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-685849/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:46:22.658400  723364 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:46:22.658588  723364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:46:22.658618  723364 out.go:304] Setting ErrFile to fd 2...
	I0617 11:46:22.658639  723364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:46:22.658998  723364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-685849/.minikube/bin
	I0617 11:46:22.659403  723364 out.go:298] Setting JSON to false
	I0617 11:46:22.660490  723364 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12530,"bootTime":1718612253,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0617 11:46:22.660598  723364 start.go:139] virtualization:  
	I0617 11:46:22.662745  723364 out.go:177] * [functional-479738] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0617 11:46:22.665475  723364 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 11:46:22.665549  723364 notify.go:220] Checking for updates...
	I0617 11:46:22.669986  723364 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 11:46:22.672887  723364 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-685849/kubeconfig
	I0617 11:46:22.674510  723364 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-685849/.minikube
	I0617 11:46:22.676078  723364 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0617 11:46:22.677622  723364 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 11:46:22.680003  723364 config.go:182] Loaded profile config "functional-479738": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0617 11:46:22.680540  723364 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 11:46:22.704397  723364 docker.go:122] docker version: linux-26.1.4:Docker Engine - Community
	I0617 11:46:22.704558  723364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0617 11:46:22.776440  723364 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-06-17 11:46:22.766761562 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0617 11:46:22.776561  723364 docker.go:295] overlay module found
	I0617 11:46:22.779532  723364 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0617 11:46:22.781469  723364 start.go:297] selected driver: docker
	I0617 11:46:22.781505  723364 start.go:901] validating driver "docker" against &{Name:functional-479738 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718296336-19068@sha256:b31b1f456eebc10b590403d2cc052bb20a70156f4629e3514cbb38ecd550e2c8 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-479738 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0617 11:46:22.781629  723364 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 11:46:22.784258  723364 out.go:177] 
	W0617 11:46:22.786079  723364 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0617 11:46:22.787698  723364 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-479738 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-479738 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-mcshm" [7700d366-b52d-413a-919e-3e618ec67d06] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-mcshm" [7700d366-b52d-413a-919e-3e618ec67d06] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004173806s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:30750
functional_test.go:1671: http://192.168.49.2:30750: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6f49f58cd5-mcshm

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30750
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.64s)
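For reference, the sequence above exposes the echoserver deployment as a NodePort service, asks minikube for its URL, and issues a plain HTTP GET against it. A minimal Go sketch of that last step, assuming the binary and profile names from this run (the NodePort, 30750 here, is allocated per run):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		bin := "out/minikube-linux-arm64" // assumed binary path
		profile := "functional-479738"    // assumed profile name

		// Resolve the NodePort URL of the service created above.
		out, err := exec.Command(bin, "-p", profile, "service", "hello-node-connect", "--url").Output()
		if err != nil {
			fmt.Println("could not resolve service URL:", err)
			return
		}
		url := strings.TrimSpace(string(out))

		resp, err := http.Get(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("GET %s -> %s\n%s", url, resp.Status, body)
	}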

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (30s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [74de6bde-ae52-46f8-90fd-0c1a379818f9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004279514s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-479738 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-479738 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-479738 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-479738 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-479738 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f8a1f311-02a5-415a-8539-a64d9e0db3f0] Pending
helpers_test.go:344: "sp-pod" [f8a1f311-02a5-415a-8539-a64d9e0db3f0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f8a1f311-02a5-415a-8539-a64d9e0db3f0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003526058s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-479738 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-479738 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-479738 delete -f testdata/storage-provisioner/pod.yaml: (1.177529644s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-479738 apply -f testdata/storage-provisioner/pod.yaml
E0617 11:46:13.107691  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b94e057f-ebaa-4627-9f85-562ee51bf6a8] Pending
helpers_test.go:344: "sp-pod" [b94e057f-ebaa-4627-9f85-562ee51bf6a8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b94e057f-ebaa-4627-9f85-562ee51bf6a8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003438013s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-479738 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.00s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh -n functional-479738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 cp functional-479738:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1478364774/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh -n functional-479738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh -n functional-479738 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.01s)

                                                
                                    
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/691242/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "sudo cat /etc/test/nested/copy/691242/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

                                                
                                    
TestFunctional/parallel/CertSync (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/691242.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "sudo cat /etc/ssl/certs/691242.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/691242.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "sudo cat /usr/share/ca-certificates/691242.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/6912422.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "sudo cat /etc/ssl/certs/6912422.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/6912422.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "sudo cat /usr/share/ca-certificates/6912422.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.94s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-479738 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-479738 ssh "sudo systemctl is-active docker": exit status 1 (341.350943ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-479738 ssh "sudo systemctl is-active crio": exit status 1 (313.862794ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)
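Both non-zero exits above are expected: with containerd selected as the container runtime, docker and crio should be inactive inside the node, so `systemctl is-active` prints "inactive" and exits 3, which `minikube ssh` surfaces as a failure. A hedged Go sketch of the same check, assuming the binary and profile names from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		bin := "out/minikube-linux-arm64" // assumed binary path
		profile := "functional-479738"    // assumed profile name

		// With containerd selected, the other runtimes should be inactive in the node.
		for _, unit := range []string{"docker", "crio"} {
			out, err := exec.Command(bin, "-p", profile, "ssh",
				"sudo systemctl is-active "+unit).Output()
			state := strings.TrimSpace(string(out))
			// systemctl exits 3 for an inactive unit; minikube ssh reports that
			// as a non-nil error, which is the behaviour the test above relies on.
			fmt.Printf("%s: %s (err=%v)\n", unit, state, err)
		}
	}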

                                                
                                    
TestFunctional/parallel/License (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-479738 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-479738 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-479738 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 721153: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-479738 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-479738 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-479738 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [06a2f6f1-4f88-4b90-b0fd-b4ce78cd2a07] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [06a2f6f1-4f88-4b90-b0fd-b4ce78cd2a07] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003525817s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.31s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-479738 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.21s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.179.62 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-479738 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-479738 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-479738 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-knmbq" [3752681b-c6c8-4cff-b3b6-f609461b325d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-knmbq" [3752681b-c6c8-4cff-b3b6-f609461b325d] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004444936s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 service list -o json
functional_test.go:1490: Took "509.151554ms" to run "out/minikube-linux-arm64 -p functional-479738 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31000
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31000
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "332.487762ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "54.324949ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "309.19984ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "51.005265ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-479738 /tmp/TestFunctionalparallelMountCmdany-port576323849/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1718624780689360602" to /tmp/TestFunctionalparallelMountCmdany-port576323849/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1718624780689360602" to /tmp/TestFunctionalparallelMountCmdany-port576323849/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1718624780689360602" to /tmp/TestFunctionalparallelMountCmdany-port576323849/001/test-1718624780689360602
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-479738 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (308.17262ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 17 11:46 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 17 11:46 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 17 11:46 test-1718624780689360602
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh cat /mount-9p/test-1718624780689360602
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-479738 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2c5fcf8c-abe2-406e-bd14-662992276b28] Pending
helpers_test.go:344: "busybox-mount" [2c5fcf8c-abe2-406e-bd14-662992276b28] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2c5fcf8c-abe2-406e-bd14-662992276b28] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2c5fcf8c-abe2-406e-bd14-662992276b28] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003321055s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-479738 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-479738 /tmp/TestFunctionalparallelMountCmdany-port576323849/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.14s)
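The first findmnt failure above is not an error in the mount itself: `minikube mount` establishes the 9p mount asynchronously, so the check simply has to be retried until the mount appears. A minimal Go sketch of that wait loop, assuming the binary and profile names from this run and the /mount-9p target used above:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		bin := "out/minikube-linux-arm64" // assumed binary path
		profile := "functional-479738"    // assumed profile name

		// Poll until findmnt inside the node reports a 9p filesystem at /mount-9p.
		for i := 0; i < 10; i++ {
			err := exec.Command(bin, "-p", profile, "ssh",
				"findmnt -T /mount-9p | grep 9p").Run()
			if err == nil {
				fmt.Println("/mount-9p is a 9p mount")
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("mount never appeared")
	}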

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-479738 /tmp/TestFunctionalparallelMountCmdspecific-port1499630349/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-479738 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (546.06688ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-479738 /tmp/TestFunctionalparallelMountCmdspecific-port1499630349/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-479738 ssh "sudo umount -f /mount-9p": exit status 1 (450.001606ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-479738 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-479738 /tmp/TestFunctionalparallelMountCmdspecific-port1499630349/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.45s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-479738 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3363329321/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-479738 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3363329321/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-479738 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3363329321/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-479738 ssh "findmnt -T" /mount1: exit status 1 (774.263656ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
2024/06/17 11:46:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-479738 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-479738 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3363329321/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-479738 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3363329321/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-479738 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3363329321/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.34s)
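For reference, the mount/verify/cleanup sequence this test exercises can be reproduced by hand along the following lines (a minimal sketch assuming the functional-479738 profile is still running; /tmp/src stands in for the generated temp directory, and only one mount point is shown):

# start a 9p mount in the background, confirm it is visible inside the node, then tear it down
out/minikube-linux-arm64 mount -p functional-479738 /tmp/src:/mount1 --alsologtostderr -v=1 &
out/minikube-linux-arm64 -p functional-479738 ssh "findmnt -T /mount1"
out/minikube-linux-arm64 mount -p functional-479738 --kill=true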

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-479738 version -o=json --components: (1.269667824s)
--- PASS: TestFunctional/parallel/Version/components (1.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-479738 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-479738
docker.io/kindest/kindnetd:v20240513-cd2ac642
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-479738 image ls --format short --alsologtostderr:
I0617 11:46:49.732378  726273 out.go:291] Setting OutFile to fd 1 ...
I0617 11:46:49.732946  726273 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 11:46:49.732990  726273 out.go:304] Setting ErrFile to fd 2...
I0617 11:46:49.733016  726273 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 11:46:49.734390  726273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-685849/.minikube/bin
I0617 11:46:49.736568  726273 config.go:182] Loaded profile config "functional-479738": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
I0617 11:46:49.736773  726273 config.go:182] Loaded profile config "functional-479738": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
I0617 11:46:49.737292  726273 cli_runner.go:164] Run: docker container inspect functional-479738 --format={{.State.Status}}
I0617 11:46:49.757215  726273 ssh_runner.go:195] Run: systemctl --version
I0617 11:46:49.757281  726273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-479738
I0617 11:46:49.777667  726273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/functional-479738/id_rsa Username:docker}
I0617 11:46:49.875977  726273 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-479738 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4740c1 | 25.3MB |
| docker.io/library/nginx                     | latest             | sha256:11ceee | 67.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/kube-apiserver              | v1.30.1            | sha256:988b55 | 29.9MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kindest/kindnetd                  | v20240513-cd2ac642 | sha256:89d73d | 25.8MB |
| docker.io/library/nginx                     | alpine             | sha256:4f4922 | 20.2MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:014faa | 66.2MB |
| registry.k8s.io/kube-scheduler              | v1.30.1            | sha256:163ff8 | 17.6MB |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/library/minikube-local-cache-test | functional-479738  | sha256:ce1c24 | 991B   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-controller-manager     | v1.30.1            | sha256:234ac5 | 28.4MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/kube-proxy                  | v1.30.1            | sha256:05eccb | 25.6MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-479738 image ls --format table --alsologtostderr:
I0617 11:46:50.066173  726335 out.go:291] Setting OutFile to fd 1 ...
I0617 11:46:50.066410  726335 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 11:46:50.066439  726335 out.go:304] Setting ErrFile to fd 2...
I0617 11:46:50.066460  726335 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 11:46:50.066740  726335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-685849/.minikube/bin
I0617 11:46:50.067506  726335 config.go:182] Loaded profile config "functional-479738": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
I0617 11:46:50.067666  726335 config.go:182] Loaded profile config "functional-479738": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
I0617 11:46:50.068138  726335 cli_runner.go:164] Run: docker container inspect functional-479738 --format={{.State.Status}}
I0617 11:46:50.090040  726335 ssh_runner.go:195] Run: systemctl --version
I0617 11:46:50.090094  726335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-479738
I0617 11:46:50.110737  726335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/functional-479738/id_rsa Username:docker}
I0617 11:46:50.216608  726335 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-479738 image ls --format json --alsologtostderr:
[{"id":"sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"25336339"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:ce1c241304d946b1aa6b72b3e82f4719d9e1e2c049d30f42747fe0e28ef22b5a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-479738"],"size":"991"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b2
33c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:988b55d423baf54b1515b7560890c84d1822a8dfbdfcecfb2576f8cb5c2b28ee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"29929992"},{"id":"sha256:05eccb821e159092de1ef74c21cd16c403e1cda549ba56687391f50f087310ee","repoDigests":["registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"25626174"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":
"268051"},{"id":"sha256:89d73d416b992e8f9602b67b4614d9e7f0655aebb3696e18efec695e0b654c40","repoDigests":["docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"],"repoTags":["docker.io/kindest/kindnetd:v20240513-cd2ac642"],"size":"25795292"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:11ceee7cdc57225711b8382e1965974bbb259de14a9f5f7d6b9f161ced50a10a","repoDigests":["docker.io/library/nginx@sha256:56b388b0d79c738f4cf51bbaf184a14fab19337f4819ceb2cae7d94100262de8"],"repoTags":["docker.io/library/nginx:latest"],"size":"67668479"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/cored
ns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"66189079"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:4f49228258b642594e55baf98d153d0e85f3fb989c1eb8450c520ed77bf27e65","repoDigests":["docker.io/library/nginx@sha256:69f8c2c72671490607f52122be2af27d4fc09657ff57e42045801aa93d2090f7"],"repoTags":["docker.io/library/nginx:alpine"],"size":"20199152"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b13
5d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:234ac56e455bef8ae70f120f49fcf5daea5c4171c5ffbf8f9e791a516fef99b4","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"28366573"},{"id":"sha256:163ff818d154da230c6c8a6211a0d939689ba04df75dba2c3742fe740ffdf44a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"17636403"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-479738 image ls --format json --alsologtostderr:
I0617 11:46:50.016627  726330 out.go:291] Setting OutFile to fd 1 ...
I0617 11:46:50.016851  726330 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 11:46:50.016884  726330 out.go:304] Setting ErrFile to fd 2...
I0617 11:46:50.016908  726330 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 11:46:50.017208  726330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-685849/.minikube/bin
I0617 11:46:50.018014  726330 config.go:182] Loaded profile config "functional-479738": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
I0617 11:46:50.018212  726330 config.go:182] Loaded profile config "functional-479738": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
I0617 11:46:50.018825  726330 cli_runner.go:164] Run: docker container inspect functional-479738 --format={{.State.Status}}
I0617 11:46:50.056755  726330 ssh_runner.go:195] Run: systemctl --version
I0617 11:46:50.056808  726330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-479738
I0617 11:46:50.089683  726330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/functional-479738/id_rsa Username:docker}
I0617 11:46:50.188714  726330 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)
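The JSON listing above is a flat array of objects with id, repoDigests, repoTags and size fields, which makes it the most convenient format to consume from scripts. A hedged example (jq on the host is an assumption, and pause:3.9 is simply one tag taken from the listing above):

# exits non-zero if the pause:3.9 image is not present in the node's containerd image store
out/minikube-linux-arm64 -p functional-479738 image ls --format json | jq -e '.[] | select(.repoTags[]? == "registry.k8s.io/pause:3.9")'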

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-479738 image ls --format yaml --alsologtostderr:
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:89d73d416b992e8f9602b67b4614d9e7f0655aebb3696e18efec695e0b654c40
repoDigests:
- docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8
repoTags:
- docker.io/kindest/kindnetd:v20240513-cd2ac642
size: "25795292"
- id: sha256:988b55d423baf54b1515b7560890c84d1822a8dfbdfcecfb2576f8cb5c2b28ee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "29929992"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:4f49228258b642594e55baf98d153d0e85f3fb989c1eb8450c520ed77bf27e65
repoDigests:
- docker.io/library/nginx@sha256:69f8c2c72671490607f52122be2af27d4fc09657ff57e42045801aa93d2090f7
repoTags:
- docker.io/library/nginx:alpine
size: "20199152"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:ce1c241304d946b1aa6b72b3e82f4719d9e1e2c049d30f42747fe0e28ef22b5a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-479738
size: "991"
- id: sha256:05eccb821e159092de1ef74c21cd16c403e1cda549ba56687391f50f087310ee
repoDigests:
- registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "25626174"
- id: sha256:163ff818d154da230c6c8a6211a0d939689ba04df75dba2c3742fe740ffdf44a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "17636403"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "25336339"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "66189079"
- id: sha256:234ac56e455bef8ae70f120f49fcf5daea5c4171c5ffbf8f9e791a516fef99b4
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "28366573"
- id: sha256:11ceee7cdc57225711b8382e1965974bbb259de14a9f5f7d6b9f161ced50a10a
repoDigests:
- docker.io/library/nginx@sha256:56b388b0d79c738f4cf51bbaf184a14fab19337f4819ceb2cae7d94100262de8
repoTags:
- docker.io/library/nginx:latest
size: "67668479"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-479738 image ls --format yaml --alsologtostderr:
I0617 11:46:49.750667  726274 out.go:291] Setting OutFile to fd 1 ...
I0617 11:46:49.750834  726274 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 11:46:49.750847  726274 out.go:304] Setting ErrFile to fd 2...
I0617 11:46:49.750853  726274 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 11:46:49.751106  726274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-685849/.minikube/bin
I0617 11:46:49.751882  726274 config.go:182] Loaded profile config "functional-479738": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
I0617 11:46:49.752045  726274 config.go:182] Loaded profile config "functional-479738": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
I0617 11:46:49.752620  726274 cli_runner.go:164] Run: docker container inspect functional-479738 --format={{.State.Status}}
I0617 11:46:49.773459  726274 ssh_runner.go:195] Run: systemctl --version
I0617 11:46:49.773518  726274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-479738
I0617 11:46:49.801438  726274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/functional-479738/id_rsa Username:docker}
I0617 11:46:49.896762  726274 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-479738 ssh pgrep buildkitd: exit status 1 (257.489524ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 image build -t localhost/my-image:functional-479738 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-479738 image build -t localhost/my-image:functional-479738 testdata/build --alsologtostderr: (2.114922875s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-479738 image build -t localhost/my-image:functional-479738 testdata/build --alsologtostderr:
I0617 11:46:50.560948  726438 out.go:291] Setting OutFile to fd 1 ...
I0617 11:46:50.561706  726438 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 11:46:50.561720  726438 out.go:304] Setting ErrFile to fd 2...
I0617 11:46:50.561725  726438 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0617 11:46:50.561983  726438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-685849/.minikube/bin
I0617 11:46:50.562644  726438 config.go:182] Loaded profile config "functional-479738": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
I0617 11:46:50.563368  726438 config.go:182] Loaded profile config "functional-479738": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
I0617 11:46:50.563941  726438 cli_runner.go:164] Run: docker container inspect functional-479738 --format={{.State.Status}}
I0617 11:46:50.581609  726438 ssh_runner.go:195] Run: systemctl --version
I0617 11:46:50.581682  726438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-479738
I0617 11:46:50.603280  726438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/functional-479738/id_rsa Username:docker}
I0617 11:46:50.691890  726438 build_images.go:161] Building image from path: /tmp/build.138436828.tar
I0617 11:46:50.691962  726438 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0617 11:46:50.700958  726438 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.138436828.tar
I0617 11:46:50.704373  726438 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.138436828.tar: stat -c "%s %y" /var/lib/minikube/build/build.138436828.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.138436828.tar': No such file or directory
I0617 11:46:50.704404  726438 ssh_runner.go:362] scp /tmp/build.138436828.tar --> /var/lib/minikube/build/build.138436828.tar (3072 bytes)
I0617 11:46:50.729104  726438 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.138436828
I0617 11:46:50.738106  726438 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.138436828 -xf /var/lib/minikube/build/build.138436828.tar
I0617 11:46:50.747550  726438 containerd.go:394] Building image: /var/lib/minikube/build/build.138436828
I0617 11:46:50.747648  726438 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.138436828 --local dockerfile=/var/lib/minikube/build/build.138436828 --output type=image,name=localhost/my-image:functional-479738
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:1c2a25a4ea72e625e6cef2e7a61f383f6b2c8c465cdd2ed86f9730a5f72b7c29 0.0s done
#8 exporting config sha256:639f5a2582ff82560775cfd789b90a30dd621f923ebb404d4dc892225c56e5ba 0.0s done
#8 naming to localhost/my-image:functional-479738 done
#8 DONE 0.2s
I0617 11:46:52.597982  726438 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.138436828 --local dockerfile=/var/lib/minikube/build/build.138436828 --output type=image,name=localhost/my-image:functional-479738: (1.850296939s)
I0617 11:46:52.598062  726438 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.138436828
I0617 11:46:52.608664  726438 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.138436828.tar
I0617 11:46:52.619412  726438 build_images.go:217] Built localhost/my-image:functional-479738 from /tmp/build.138436828.tar
I0617 11:46:52.619500  726438 build_images.go:133] succeeded building to: functional-479738
I0617 11:46:52.619513  726438 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.60s)
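The three build steps logged above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) imply a build context along the following lines. This is a reconstruction from the step labels, not the actual contents of testdata/build, and the content.txt payload is a placeholder:

# recreate an equivalent context and build it inside the node with the same minikube subcommand
mkdir -p /tmp/build-ctx && cd /tmp/build-ctx
printf 'test\n' > content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-arm64 -p functional-479738 image build -t localhost/my-image:functional-479738 . --alsologtostderr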

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.712648108s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-479738
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.74s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 image rm gcr.io/google-containers/addon-resizer:functional-479738 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-479738
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-479738 image save --daemon gcr.io/google-containers/addon-resizer:functional-479738 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-479738
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-479738
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-479738
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-479738
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (127.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-071405 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0617 11:48:29.263732  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
E0617 11:48:56.948561  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-071405 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m6.310250088s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (127.14s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (20.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-071405 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-071405 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-071405 -- rollout status deployment/busybox: (17.08855118s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-071405 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-071405 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-071405 -- exec busybox-fc5497c4f-95frx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-071405 -- exec busybox-fc5497c4f-9pgzw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-071405 -- exec busybox-fc5497c4f-n7px7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-071405 -- exec busybox-fc5497c4f-95frx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-071405 -- exec busybox-fc5497c4f-9pgzw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-071405 -- exec busybox-fc5497c4f-n7px7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-071405 -- exec busybox-fc5497c4f-95frx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-071405 -- exec busybox-fc5497c4f-9pgzw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-071405 -- exec busybox-fc5497c4f-n7px7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (20.04s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-071405 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-071405 -- exec busybox-fc5497c4f-95frx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-071405 -- exec busybox-fc5497c4f-95frx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-071405 -- exec busybox-fc5497c4f-9pgzw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-071405 -- exec busybox-fc5497c4f-9pgzw -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-071405 -- exec busybox-fc5497c4f-n7px7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-071405 -- exec busybox-fc5497c4f-n7px7 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.64s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (22.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-071405 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-071405 -v=7 --alsologtostderr: (21.473751794s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-071405 status -v=7 --alsologtostderr: (1.024113402s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (22.50s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-071405 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.80s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-071405 status --output json -v=7 --alsologtostderr: (1.021737576s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 cp testdata/cp-test.txt ha-071405:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 cp ha-071405:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3475376680/001/cp-test_ha-071405.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 cp ha-071405:/home/docker/cp-test.txt ha-071405-m02:/home/docker/cp-test_ha-071405_ha-071405-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m02 "sudo cat /home/docker/cp-test_ha-071405_ha-071405-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 cp ha-071405:/home/docker/cp-test.txt ha-071405-m03:/home/docker/cp-test_ha-071405_ha-071405-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m03 "sudo cat /home/docker/cp-test_ha-071405_ha-071405-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 cp ha-071405:/home/docker/cp-test.txt ha-071405-m04:/home/docker/cp-test_ha-071405_ha-071405-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m04 "sudo cat /home/docker/cp-test_ha-071405_ha-071405-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 cp testdata/cp-test.txt ha-071405-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 cp ha-071405-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3475376680/001/cp-test_ha-071405-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 cp ha-071405-m02:/home/docker/cp-test.txt ha-071405:/home/docker/cp-test_ha-071405-m02_ha-071405.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405 "sudo cat /home/docker/cp-test_ha-071405-m02_ha-071405.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 cp ha-071405-m02:/home/docker/cp-test.txt ha-071405-m03:/home/docker/cp-test_ha-071405-m02_ha-071405-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m03 "sudo cat /home/docker/cp-test_ha-071405-m02_ha-071405-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 cp ha-071405-m02:/home/docker/cp-test.txt ha-071405-m04:/home/docker/cp-test_ha-071405-m02_ha-071405-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m04 "sudo cat /home/docker/cp-test_ha-071405-m02_ha-071405-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 cp testdata/cp-test.txt ha-071405-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 cp ha-071405-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3475376680/001/cp-test_ha-071405-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 cp ha-071405-m03:/home/docker/cp-test.txt ha-071405:/home/docker/cp-test_ha-071405-m03_ha-071405.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405 "sudo cat /home/docker/cp-test_ha-071405-m03_ha-071405.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 cp ha-071405-m03:/home/docker/cp-test.txt ha-071405-m02:/home/docker/cp-test_ha-071405-m03_ha-071405-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m02 "sudo cat /home/docker/cp-test_ha-071405-m03_ha-071405-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 cp ha-071405-m03:/home/docker/cp-test.txt ha-071405-m04:/home/docker/cp-test_ha-071405-m03_ha-071405-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m04 "sudo cat /home/docker/cp-test_ha-071405-m03_ha-071405-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 cp testdata/cp-test.txt ha-071405-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 cp ha-071405-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3475376680/001/cp-test_ha-071405-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 cp ha-071405-m04:/home/docker/cp-test.txt ha-071405:/home/docker/cp-test_ha-071405-m04_ha-071405.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405 "sudo cat /home/docker/cp-test_ha-071405-m04_ha-071405.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 cp ha-071405-m04:/home/docker/cp-test.txt ha-071405-m02:/home/docker/cp-test_ha-071405-m04_ha-071405-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m02 "sudo cat /home/docker/cp-test_ha-071405-m04_ha-071405-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 cp ha-071405-m04:/home/docker/cp-test.txt ha-071405-m03:/home/docker/cp-test_ha-071405-m04_ha-071405-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 ssh -n ha-071405-m03 "sudo cat /home/docker/cp-test_ha-071405-m04_ha-071405-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.04s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-071405 node stop m02 -v=7 --alsologtostderr: (12.138988546s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-071405 status -v=7 --alsologtostderr: exit status 7 (720.247855ms)

                                                
                                                
-- stdout --
	ha-071405
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-071405-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-071405-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-071405-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:50:19.131318  741790 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:50:19.131538  741790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:50:19.131551  741790 out.go:304] Setting ErrFile to fd 2...
	I0617 11:50:19.131559  741790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:50:19.131939  741790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-685849/.minikube/bin
	I0617 11:50:19.132258  741790 out.go:298] Setting JSON to false
	I0617 11:50:19.132385  741790 mustload.go:65] Loading cluster: ha-071405
	I0617 11:50:19.132509  741790 notify.go:220] Checking for updates...
	I0617 11:50:19.133031  741790 config.go:182] Loaded profile config "ha-071405": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0617 11:50:19.133056  741790 status.go:255] checking status of ha-071405 ...
	I0617 11:50:19.133867  741790 cli_runner.go:164] Run: docker container inspect ha-071405 --format={{.State.Status}}
	I0617 11:50:19.157837  741790 status.go:330] ha-071405 host status = "Running" (err=<nil>)
	I0617 11:50:19.157912  741790 host.go:66] Checking if "ha-071405" exists ...
	I0617 11:50:19.158264  741790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-071405
	I0617 11:50:19.175762  741790 host.go:66] Checking if "ha-071405" exists ...
	I0617 11:50:19.176130  741790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:50:19.176197  741790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-071405
	I0617 11:50:19.194438  741790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/ha-071405/id_rsa Username:docker}
	I0617 11:50:19.297001  741790 ssh_runner.go:195] Run: systemctl --version
	I0617 11:50:19.303530  741790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:50:19.317511  741790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0617 11:50:19.379315  741790 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:72 SystemTime:2024-06-17 11:50:19.369186051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0617 11:50:19.379954  741790 kubeconfig.go:125] found "ha-071405" server: "https://192.168.49.254:8443"
	I0617 11:50:19.379990  741790 api_server.go:166] Checking apiserver status ...
	I0617 11:50:19.380034  741790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:50:19.391708  741790 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1521/cgroup
	I0617 11:50:19.402407  741790 api_server.go:182] apiserver freezer: "6:freezer:/docker/07bf5447e83cfb31687f62f0aab7ccefdb114f0368499b9717beb2fa0b43bcbf/kubepods/burstable/pod04c19827715e9298335efb4cabf8784a/7a4d8cafc74bb07f0fdc73eee10d16d87ea491396741c245094ddf241b741851"
	I0617 11:50:19.402484  741790 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/07bf5447e83cfb31687f62f0aab7ccefdb114f0368499b9717beb2fa0b43bcbf/kubepods/burstable/pod04c19827715e9298335efb4cabf8784a/7a4d8cafc74bb07f0fdc73eee10d16d87ea491396741c245094ddf241b741851/freezer.state
	I0617 11:50:19.412375  741790 api_server.go:204] freezer state: "THAWED"
	I0617 11:50:19.412409  741790 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0617 11:50:19.420588  741790 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0617 11:50:19.420620  741790 status.go:422] ha-071405 apiserver status = Running (err=<nil>)
	I0617 11:50:19.420632  741790 status.go:257] ha-071405 status: &{Name:ha-071405 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:50:19.420664  741790 status.go:255] checking status of ha-071405-m02 ...
	I0617 11:50:19.420974  741790 cli_runner.go:164] Run: docker container inspect ha-071405-m02 --format={{.State.Status}}
	I0617 11:50:19.437385  741790 status.go:330] ha-071405-m02 host status = "Stopped" (err=<nil>)
	I0617 11:50:19.437409  741790 status.go:343] host is not running, skipping remaining checks
	I0617 11:50:19.437417  741790 status.go:257] ha-071405-m02 status: &{Name:ha-071405-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:50:19.437476  741790 status.go:255] checking status of ha-071405-m03 ...
	I0617 11:50:19.437794  741790 cli_runner.go:164] Run: docker container inspect ha-071405-m03 --format={{.State.Status}}
	I0617 11:50:19.454887  741790 status.go:330] ha-071405-m03 host status = "Running" (err=<nil>)
	I0617 11:50:19.454914  741790 host.go:66] Checking if "ha-071405-m03" exists ...
	I0617 11:50:19.455410  741790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-071405-m03
	I0617 11:50:19.471822  741790 host.go:66] Checking if "ha-071405-m03" exists ...
	I0617 11:50:19.472132  741790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:50:19.472182  741790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-071405-m03
	I0617 11:50:19.489349  741790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33567 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/ha-071405-m03/id_rsa Username:docker}
	I0617 11:50:19.577828  741790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:50:19.591616  741790 kubeconfig.go:125] found "ha-071405" server: "https://192.168.49.254:8443"
	I0617 11:50:19.591646  741790 api_server.go:166] Checking apiserver status ...
	I0617 11:50:19.591709  741790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 11:50:19.603106  741790 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1325/cgroup
	I0617 11:50:19.613125  741790 api_server.go:182] apiserver freezer: "6:freezer:/docker/15537d89c4a7f30443f0a02c27c657e7b806b0ee2b2eff851f23c16ce2daa383/kubepods/burstable/pod4ac401065297b535b14d9d86bc4d942a/c118d52f6b8e132398da5e0ccdfdfdf40dec84858ea1d559f04b1ae799dca3f8"
	I0617 11:50:19.613221  741790 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/15537d89c4a7f30443f0a02c27c657e7b806b0ee2b2eff851f23c16ce2daa383/kubepods/burstable/pod4ac401065297b535b14d9d86bc4d942a/c118d52f6b8e132398da5e0ccdfdfdf40dec84858ea1d559f04b1ae799dca3f8/freezer.state
	I0617 11:50:19.623192  741790 api_server.go:204] freezer state: "THAWED"
	I0617 11:50:19.623224  741790 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0617 11:50:19.630944  741790 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0617 11:50:19.631012  741790 status.go:422] ha-071405-m03 apiserver status = Running (err=<nil>)
	I0617 11:50:19.631026  741790 status.go:257] ha-071405-m03 status: &{Name:ha-071405-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:50:19.631054  741790 status.go:255] checking status of ha-071405-m04 ...
	I0617 11:50:19.631360  741790 cli_runner.go:164] Run: docker container inspect ha-071405-m04 --format={{.State.Status}}
	I0617 11:50:19.649975  741790 status.go:330] ha-071405-m04 host status = "Running" (err=<nil>)
	I0617 11:50:19.650003  741790 host.go:66] Checking if "ha-071405-m04" exists ...
	I0617 11:50:19.650293  741790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-071405-m04
	I0617 11:50:19.666254  741790 host.go:66] Checking if "ha-071405-m04" exists ...
	I0617 11:50:19.666548  741790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 11:50:19.666621  741790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-071405-m04
	I0617 11:50:19.684441  741790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33572 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/ha-071405-m04/id_rsa Username:docker}
	I0617 11:50:19.777833  741790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 11:50:19.792732  741790 status.go:257] ha-071405-m04 status: &{Name:ha-071405-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.86s)
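
Note: the stderr trace above shows how `minikube status` decides that an apiserver is Running: it locates the kube-apiserver process, resolves its freezer cgroup via /proc/<pid>/cgroup, confirms the cgroup is THAWED, and finally probes /healthz on the load-balancer endpoint. The Go sketch below replays that sequence; function and variable names are illustrative, not minikube's actual internals, and in the real test every command runs over SSH inside the node container.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "os/exec"
        "strings"
    )

    func checkAPIServer(healthzURL string) error {
        // 1. PID of the apiserver process (same pgrep pattern as in the log).
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return fmt.Errorf("pgrep: %w", err)
        }
        pid := strings.TrimSpace(string(out))

        // 2. Freezer cgroup path for that PID, e.g. "6:freezer:/docker/<id>/kubepods/...".
        out, err = exec.Command("sudo", "egrep", "^[0-9]+:freezer:", "/proc/"+pid+"/cgroup").Output()
        if err != nil {
            return fmt.Errorf("read cgroup: %w", err)
        }
        freezerPath := strings.TrimSpace(strings.SplitN(string(out), ":freezer:", 2)[1])

        // 3. The cgroup must be THAWED, i.e. the apiserver is not paused/frozen.
        out, err = exec.Command("sudo", "cat", "/sys/fs/cgroup/freezer"+freezerPath+"/freezer.state").Output()
        if err != nil {
            return fmt.Errorf("freezer state: %w", err)
        }
        if state := strings.TrimSpace(string(out)); state != "THAWED" {
            return fmt.Errorf("apiserver cgroup is %q", state)
        }

        // 4. Finally, /healthz must answer 200 (self-signed cert, so skip verification).
        client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
        resp, err := client.Get(healthzURL)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := checkAPIServer("https://192.168.49.254:8443/healthz"); err != nil {
            fmt.Println("apiserver not healthy:", err)
            return
        }
        fmt.Println("apiserver healthy: ok")
    }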

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (18.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-071405 node start m02 -v=7 --alsologtostderr: (17.269174223s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-071405 status -v=7 --alsologtostderr: (1.034355085s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.41s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (140.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-071405 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-071405 -v=7 --alsologtostderr
E0617 11:50:51.335767  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
E0617 11:50:51.341027  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
E0617 11:50:51.351228  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
E0617 11:50:51.371511  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
E0617 11:50:51.411787  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
E0617 11:50:51.492061  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
E0617 11:50:51.652526  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
E0617 11:50:51.973314  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
E0617 11:50:52.613908  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
E0617 11:50:53.894456  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
E0617 11:50:56.455575  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
E0617 11:51:01.575833  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
E0617 11:51:11.816093  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-071405 -v=7 --alsologtostderr: (37.322779092s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-071405 --wait=true -v=7 --alsologtostderr
E0617 11:51:32.296884  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
E0617 11:52:13.257435  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-071405 --wait=true -v=7 --alsologtostderr: (1m42.600900861s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-071405
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (140.10s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-071405 node delete m03 -v=7 --alsologtostderr: (10.985537083s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.89s)
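
Note: the `kubectl get nodes -o go-template=...` step above prints the status of each node's Ready condition, one value per line, and the check only needs every line to read True. The self-contained sketch below evaluates that same template with Go's text/template against stubbed node data; the stub shape is an assumption covering only the fields the template touches.

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        // Exact template used in the test, minus the shell quoting.
        const tpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

        // Stubbed shape of `kubectl get nodes -o json`: only what the template reads.
        nodes := map[string]any{
            "items": []map[string]any{
                {"status": map[string]any{"conditions": []map[string]any{
                    {"type": "Ready", "status": "True"},
                    {"type": "MemoryPressure", "status": "False"},
                }}},
                {"status": map[string]any{"conditions": []map[string]any{
                    {"type": "Ready", "status": "True"},
                }}},
            },
        }

        t := template.Must(template.New("ready").Parse(tpl))
        _ = t.Execute(os.Stdout, nodes) // prints " True" once per node whose Ready condition is True
    }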

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.50s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 stop -v=7 --alsologtostderr
E0617 11:53:29.263610  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
E0617 11:53:35.178498  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-071405 stop -v=7 --alsologtostderr: (35.897104929s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-071405 status -v=7 --alsologtostderr: exit status 7 (110.093024ms)

                                                
                                                
-- stdout --
	ha-071405
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-071405-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-071405-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 11:53:48.032292  755457 out.go:291] Setting OutFile to fd 1 ...
	I0617 11:53:48.032484  755457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:53:48.032497  755457 out.go:304] Setting ErrFile to fd 2...
	I0617 11:53:48.032502  755457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 11:53:48.032808  755457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-685849/.minikube/bin
	I0617 11:53:48.033052  755457 out.go:298] Setting JSON to false
	I0617 11:53:48.033103  755457 mustload.go:65] Loading cluster: ha-071405
	I0617 11:53:48.033214  755457 notify.go:220] Checking for updates...
	I0617 11:53:48.033667  755457 config.go:182] Loaded profile config "ha-071405": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0617 11:53:48.033733  755457 status.go:255] checking status of ha-071405 ...
	I0617 11:53:48.034282  755457 cli_runner.go:164] Run: docker container inspect ha-071405 --format={{.State.Status}}
	I0617 11:53:48.055842  755457 status.go:330] ha-071405 host status = "Stopped" (err=<nil>)
	I0617 11:53:48.055879  755457 status.go:343] host is not running, skipping remaining checks
	I0617 11:53:48.055889  755457 status.go:257] ha-071405 status: &{Name:ha-071405 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:53:48.055939  755457 status.go:255] checking status of ha-071405-m02 ...
	I0617 11:53:48.056278  755457 cli_runner.go:164] Run: docker container inspect ha-071405-m02 --format={{.State.Status}}
	I0617 11:53:48.072943  755457 status.go:330] ha-071405-m02 host status = "Stopped" (err=<nil>)
	I0617 11:53:48.072965  755457 status.go:343] host is not running, skipping remaining checks
	I0617 11:53:48.072973  755457 status.go:257] ha-071405-m02 status: &{Name:ha-071405-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 11:53:48.072998  755457 status.go:255] checking status of ha-071405-m04 ...
	I0617 11:53:48.073307  755457 cli_runner.go:164] Run: docker container inspect ha-071405-m04 --format={{.State.Status}}
	I0617 11:53:48.090140  755457 status.go:330] ha-071405-m04 host status = "Stopped" (err=<nil>)
	I0617 11:53:48.090161  755457 status.go:343] host is not running, skipping remaining checks
	I0617 11:53:48.090168  755457 status.go:257] ha-071405-m04 status: &{Name:ha-071405-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.01s)
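
Note: each status.go:257 line in the stderr above carries one node's status as a struct, and the human-readable stdout block is simply a rendering of those fields. The sketch below mirrors only the fields visible in this log; it is not minikube's actual status type.

    package main

    import "fmt"

    // nodeStatus mirrors the fields printed by the status.go log lines above.
    type nodeStatus struct {
        Name       string
        Host       string // "Running" / "Stopped"
        Kubelet    string
        APIServer  string // reported as "Irrelevant" for worker nodes
        Kubeconfig string
        Worker     bool
    }

    // render reproduces the per-node summary layout seen in the stdout block.
    func render(s nodeStatus) {
        fmt.Println(s.Name)
        if s.Worker {
            fmt.Println("type: Worker")
        } else {
            fmt.Println("type: Control Plane")
        }
        fmt.Println("host:", s.Host)
        fmt.Println("kubelet:", s.Kubelet)
        if !s.Worker {
            fmt.Println("apiserver:", s.APIServer)
            fmt.Println("kubeconfig:", s.Kubeconfig)
        }
        fmt.Println()
    }

    func main() {
        render(nodeStatus{Name: "ha-071405", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"})
        render(nodeStatus{Name: "ha-071405-m04", Host: "Stopped", Kubelet: "Stopped", Worker: true})
    }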

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (65.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-071405 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-071405 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m4.700444335s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (65.63s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (44.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-071405 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-071405 --control-plane -v=7 --alsologtostderr: (43.783613733s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-071405 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.76s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.74s)

                                                
                                    
TestJSONOutput/start/Command (59.86s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-412800 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0617 11:55:51.335848  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
E0617 11:56:19.019474  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-412800 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (59.857961338s)
--- PASS: TestJSONOutput/start/Command (59.86s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-412800 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-412800 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.76s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-412800 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-412800 --output=json --user=testUser: (5.7611985s)
--- PASS: TestJSONOutput/stop/Command (5.76s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-088393 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-088393 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (82.177468ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b4edc4fd-abe0-4527-aeae-e10827cab7e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-088393] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"efb0d66c-ec50-42f1-9cf2-e12c4c4565ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19084"}}
	{"specversion":"1.0","id":"56d1cb4b-c4d7-4448-bf36-b1ae4119cd8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"35fc2443-841b-4647-add1-5a9f991fcb05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19084-685849/kubeconfig"}}
	{"specversion":"1.0","id":"9ecfeed8-2b23-4e53-aab5-fda0f08f1dcd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-685849/.minikube"}}
	{"specversion":"1.0","id":"36b2beae-0d74-4d40-aef7-cff2b421b3d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"dd8f2b5b-633b-45ce-a639-5ac6b2552f66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e0a8ddea-5302-486d-9fb2-68515e73f6ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-088393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-088393
--- PASS: TestErrorJSONOutput (0.22s)
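
Note: every line in the stdout block above is a CloudEvents-style JSON object emitted by `--output=json` (event types io.k8s.sigs.minikube.step, .info and .error are all visible here). A minimal Go sketch for decoding such a stream, one event per line; the struct covers only the fields present in this log.

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // minikubeEvent mirrors the fields seen in the JSON lines above.
    type minikubeEvent struct {
        SpecVersion     string            `json:"specversion"`
        ID              string            `json:"id"`
        Source          string            `json:"source"`
        Type            string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
        DataContentType string            `json:"datacontenttype"`
        Data            map[string]string `json:"data"` // message, currentstep, exitcode, ...
    }

    func main() {
        // e.g. pipe `minikube start --output=json ...` into this program.
        scanner := bufio.NewScanner(os.Stdin)
        for scanner.Scan() {
            var ev minikubeEvent
            if err := json.Unmarshal(scanner.Bytes(), &ev); err != nil {
                continue // skip any non-JSON lines
            }
            fmt.Printf("%-35s %s\n", ev.Type, ev.Data["message"])
        }
    }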

                                                
                                    
TestKicCustomNetwork/create_custom_network (42.82s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-629249 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-629249 --network=: (40.776623056s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-629249" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-629249
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-629249: (2.022596121s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.82s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (35.5s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-618736 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-618736 --network=bridge: (33.57979542s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-618736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-618736
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-618736: (1.900939599s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.50s)

                                                
                                    
TestKicExistingNetwork (35.86s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-053695 --network=existing-network
E0617 11:58:29.263495  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-053695 --network=existing-network: (33.698364169s)
helpers_test.go:175: Cleaning up "existing-network-053695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-053695
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-053695: (1.997497238s)
--- PASS: TestKicExistingNetwork (35.86s)

                                                
                                    
TestKicCustomSubnet (33.56s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-150731 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-150731 --subnet=192.168.60.0/24: (31.436639163s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-150731 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-150731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-150731
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-150731: (2.101912887s)
--- PASS: TestKicCustomSubnet (33.56s)
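
Note: the subnet assertion above reduces to comparing the --subnet value passed to `minikube start` with what Docker actually allocated for the profile's network. A minimal sketch reusing the exact inspect format string from the log; the network name is taken from this run.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        want := "192.168.60.0/24"
        // Same command as kic_custom_network_test.go:161 above.
        out, err := exec.Command("docker", "network", "inspect", "custom-subnet-150731",
            "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        if got := strings.TrimSpace(string(out)); got != want {
            fmt.Printf("subnet mismatch: got %s, want %s\n", got, want)
            return
        }
        fmt.Println("subnet matches:", want)
    }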

                                                
                                    
TestKicStaticIP (38.81s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-975989 --static-ip=192.168.200.200
E0617 11:59:52.308947  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-975989 --static-ip=192.168.200.200: (36.540598996s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-975989 ip
helpers_test.go:175: Cleaning up "static-ip-975989" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-975989
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-975989: (2.141596527s)
--- PASS: TestKicStaticIP (38.81s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (69.25s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-264408 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-264408 --driver=docker  --container-runtime=containerd: (29.841767826s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-267435 --driver=docker  --container-runtime=containerd
E0617 12:00:51.336039  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-267435 --driver=docker  --container-runtime=containerd: (34.116921262s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-264408
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-267435
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-267435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-267435
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-267435: (1.910521586s)
helpers_test.go:175: Cleaning up "first-264408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-264408
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-264408: (2.18085861s)
--- PASS: TestMinikubeProfile (69.25s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.23s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-448998 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-448998 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.229443903s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.23s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-448998 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.41s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-462807 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-462807 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.407700289s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.41s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-462807 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-448998 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-448998 --alsologtostderr -v=5: (1.65141315s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-462807 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                    
TestMountStart/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-462807
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-462807: (1.185606849s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.3s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-462807
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-462807: (7.303702004s)
--- PASS: TestMountStart/serial/RestartStopped (8.30s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-462807 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (71.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-137471 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-137471 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m11.250607315s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (71.76s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-137471 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-137471 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-137471 -- rollout status deployment/busybox: (2.703643188s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-137471 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-137471 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-137471 -- exec busybox-fc5497c4f-2pvqb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-137471 -- exec busybox-fc5497c4f-cvbxc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-137471 -- exec busybox-fc5497c4f-2pvqb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-137471 -- exec busybox-fc5497c4f-cvbxc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-137471 -- exec busybox-fc5497c4f-2pvqb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-137471 -- exec busybox-fc5497c4f-cvbxc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.57s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-137471 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-137471 -- exec busybox-fc5497c4f-2pvqb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-137471 -- exec busybox-fc5497c4f-2pvqb -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-137471 -- exec busybox-fc5497c4f-cvbxc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-137471 -- exec busybox-fc5497c4f-cvbxc -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)
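
Note: the host-connectivity check above is two steps per pod: resolve host.minikube.internal from inside the pod, then ping the address it resolves to (192.168.67.1, the bridge gateway, in this run). A rough sketch using plain kubectl; the pod name is a placeholder from this run and the current kube-context is assumed to point at the cluster.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        pod := "busybox-fc5497c4f-2pvqb" // placeholder: any running busybox pod

        // Step 1: resolve host.minikube.internal from inside the pod (same pipeline as the test).
        out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c",
            "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3").Output()
        if err != nil {
            fmt.Println("nslookup failed:", err)
            return
        }
        hostIP := strings.TrimSpace(string(out))

        // Step 2: the pod must be able to reach the host on that address.
        if err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c",
            "ping -c 1 "+hostIP).Run(); err != nil {
            fmt.Printf("pod cannot reach host %s: %v\n", hostIP, err)
            return
        }
        fmt.Printf("pod reached host at %s\n", hostIP)
    }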

                                                
                                    
TestMultiNode/serial/AddNode (16.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-137471 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-137471 -v 3 --alsologtostderr: (16.111841794s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.79s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-137471 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 cp testdata/cp-test.txt multinode-137471:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 ssh -n multinode-137471 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 cp multinode-137471:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1118297353/001/cp-test_multinode-137471.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 ssh -n multinode-137471 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 cp multinode-137471:/home/docker/cp-test.txt multinode-137471-m02:/home/docker/cp-test_multinode-137471_multinode-137471-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 ssh -n multinode-137471 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 ssh -n multinode-137471-m02 "sudo cat /home/docker/cp-test_multinode-137471_multinode-137471-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 cp multinode-137471:/home/docker/cp-test.txt multinode-137471-m03:/home/docker/cp-test_multinode-137471_multinode-137471-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 ssh -n multinode-137471 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 ssh -n multinode-137471-m03 "sudo cat /home/docker/cp-test_multinode-137471_multinode-137471-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 cp testdata/cp-test.txt multinode-137471-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 ssh -n multinode-137471-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 cp multinode-137471-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1118297353/001/cp-test_multinode-137471-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 ssh -n multinode-137471-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 cp multinode-137471-m02:/home/docker/cp-test.txt multinode-137471:/home/docker/cp-test_multinode-137471-m02_multinode-137471.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 ssh -n multinode-137471-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 ssh -n multinode-137471 "sudo cat /home/docker/cp-test_multinode-137471-m02_multinode-137471.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 cp multinode-137471-m02:/home/docker/cp-test.txt multinode-137471-m03:/home/docker/cp-test_multinode-137471-m02_multinode-137471-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 ssh -n multinode-137471-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 ssh -n multinode-137471-m03 "sudo cat /home/docker/cp-test_multinode-137471-m02_multinode-137471-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 cp testdata/cp-test.txt multinode-137471-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 ssh -n multinode-137471-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 cp multinode-137471-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1118297353/001/cp-test_multinode-137471-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 ssh -n multinode-137471-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 cp multinode-137471-m03:/home/docker/cp-test.txt multinode-137471:/home/docker/cp-test_multinode-137471-m03_multinode-137471.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 ssh -n multinode-137471-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 ssh -n multinode-137471 "sudo cat /home/docker/cp-test_multinode-137471-m03_multinode-137471.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 cp multinode-137471-m03:/home/docker/cp-test.txt multinode-137471-m02:/home/docker/cp-test_multinode-137471-m03_multinode-137471-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 ssh -n multinode-137471-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 ssh -n multinode-137471-m02 "sudo cat /home/docker/cp-test_multinode-137471-m03_multinode-137471-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.77s)
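For reference, the copy matrix exercised above reduces to three `cp` directions plus an `ssh cat` check. A minimal sketch of the same flow, using the profile, node names, and paths from the log (the binary path is the locally built minikube under test; a normal `minikube` install behaves the same):

    # host -> node: push a local file into a node of the profile
    out/minikube-linux-arm64 -p multinode-137471 cp testdata/cp-test.txt multinode-137471:/home/docker/cp-test.txt
    # node -> host: pull it back out to a local path
    out/minikube-linux-arm64 -p multinode-137471 cp multinode-137471:/home/docker/cp-test.txt /tmp/cp-test_multinode-137471.txt
    # node -> node: copy directly between two nodes of the same profile
    out/minikube-linux-arm64 -p multinode-137471 cp multinode-137471:/home/docker/cp-test.txt multinode-137471-m02:/home/docker/cp-test.txt
    # verify the contents landed where expected
    out/minikube-linux-arm64 -p multinode-137471 ssh -n multinode-137471-m02 "sudo cat /home/docker/cp-test.txt"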

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-137471 node stop m03: (1.208476413s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-137471 status: exit status 7 (498.237938ms)

                                                
                                                
-- stdout --
	multinode-137471
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-137471-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-137471-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-137471 status --alsologtostderr: exit status 7 (534.559705ms)

                                                
                                                
-- stdout --
	multinode-137471
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-137471-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-137471-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 12:03:28.041116  805771 out.go:291] Setting OutFile to fd 1 ...
	I0617 12:03:28.041283  805771 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 12:03:28.041297  805771 out.go:304] Setting ErrFile to fd 2...
	I0617 12:03:28.041303  805771 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 12:03:28.041578  805771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-685849/.minikube/bin
	I0617 12:03:28.041816  805771 out.go:298] Setting JSON to false
	I0617 12:03:28.041847  805771 mustload.go:65] Loading cluster: multinode-137471
	I0617 12:03:28.041960  805771 notify.go:220] Checking for updates...
	I0617 12:03:28.042340  805771 config.go:182] Loaded profile config "multinode-137471": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0617 12:03:28.042353  805771 status.go:255] checking status of multinode-137471 ...
	I0617 12:03:28.042913  805771 cli_runner.go:164] Run: docker container inspect multinode-137471 --format={{.State.Status}}
	I0617 12:03:28.061876  805771 status.go:330] multinode-137471 host status = "Running" (err=<nil>)
	I0617 12:03:28.061916  805771 host.go:66] Checking if "multinode-137471" exists ...
	I0617 12:03:28.062328  805771 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-137471
	I0617 12:03:28.080325  805771 host.go:66] Checking if "multinode-137471" exists ...
	I0617 12:03:28.080701  805771 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 12:03:28.080759  805771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-137471
	I0617 12:03:28.111663  805771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33677 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/multinode-137471/id_rsa Username:docker}
	I0617 12:03:28.205201  805771 ssh_runner.go:195] Run: systemctl --version
	I0617 12:03:28.209632  805771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:03:28.221554  805771 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0617 12:03:28.280765  805771 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-06-17 12:03:28.269887366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0617 12:03:28.281428  805771 kubeconfig.go:125] found "multinode-137471" server: "https://192.168.67.2:8443"
	I0617 12:03:28.281475  805771 api_server.go:166] Checking apiserver status ...
	I0617 12:03:28.281523  805771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0617 12:03:28.293114  805771 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup
	I0617 12:03:28.303547  805771 api_server.go:182] apiserver freezer: "6:freezer:/docker/bb1a796efc4b234ab73f94950f9d8a6799bc11545fe7d9cea5229b21b44d7a8a/kubepods/burstable/podce54c33e2be602b785b8296a016a1542/c427e2f0247d8a892ecd213ff0516dec22a1597a5688d76e215b0f497e596239"
	I0617 12:03:28.303640  805771 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bb1a796efc4b234ab73f94950f9d8a6799bc11545fe7d9cea5229b21b44d7a8a/kubepods/burstable/podce54c33e2be602b785b8296a016a1542/c427e2f0247d8a892ecd213ff0516dec22a1597a5688d76e215b0f497e596239/freezer.state
	I0617 12:03:28.312935  805771 api_server.go:204] freezer state: "THAWED"
	I0617 12:03:28.312969  805771 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0617 12:03:28.321618  805771 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0617 12:03:28.321650  805771 status.go:422] multinode-137471 apiserver status = Running (err=<nil>)
	I0617 12:03:28.321662  805771 status.go:257] multinode-137471 status: &{Name:multinode-137471 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 12:03:28.321708  805771 status.go:255] checking status of multinode-137471-m02 ...
	I0617 12:03:28.322042  805771 cli_runner.go:164] Run: docker container inspect multinode-137471-m02 --format={{.State.Status}}
	I0617 12:03:28.338003  805771 status.go:330] multinode-137471-m02 host status = "Running" (err=<nil>)
	I0617 12:03:28.338029  805771 host.go:66] Checking if "multinode-137471-m02" exists ...
	I0617 12:03:28.338346  805771 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-137471-m02
	I0617 12:03:28.361321  805771 host.go:66] Checking if "multinode-137471-m02" exists ...
	I0617 12:03:28.361623  805771 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0617 12:03:28.361676  805771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-137471-m02
	I0617 12:03:28.380848  805771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33682 SSHKeyPath:/home/jenkins/minikube-integration/19084-685849/.minikube/machines/multinode-137471-m02/id_rsa Username:docker}
	I0617 12:03:28.474291  805771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0617 12:03:28.485617  805771 status.go:257] multinode-137471-m02 status: &{Name:multinode-137471-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0617 12:03:28.485653  805771 status.go:255] checking status of multinode-137471-m03 ...
	I0617 12:03:28.485962  805771 cli_runner.go:164] Run: docker container inspect multinode-137471-m03 --format={{.State.Status}}
	I0617 12:03:28.502901  805771 status.go:330] multinode-137471-m03 host status = "Stopped" (err=<nil>)
	I0617 12:03:28.502922  805771 status.go:343] host is not running, skipping remaining checks
	I0617 12:03:28.502930  805771 status.go:257] multinode-137471-m03 status: &{Name:multinode-137471-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
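The useful detail above is that `status` exits non-zero (7 in these runs) as soon as any node of the profile is stopped. A minimal sketch of using that from a script, with the flags taken from the log (the exit-code handling itself is illustrative):

    # stop one worker node in the profile
    out/minikube-linux-arm64 -p multinode-137471 node stop m03
    # status exits 7 in the runs above while a node is stopped, so capture the code
    out/minikube-linux-arm64 -p multinode-137471 status
    rc=$?
    if [ "$rc" -ne 0 ]; then
      echo "at least one node is not running (minikube status exited $rc)"
    fi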

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 node start m03 -v=7 --alsologtostderr
E0617 12:03:29.263454  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-137471 node start m03 -v=7 --alsologtostderr: (9.535038275s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.28s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (86.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-137471
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-137471
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-137471: (24.975743817s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-137471 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-137471 --wait=true -v=8 --alsologtostderr: (1m1.864567224s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-137471
--- PASS: TestMultiNode/serial/RestartKeepsNodes (86.96s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-137471 node delete m03: (4.759410795s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.44s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-137471 stop: (23.805275445s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-137471 status: exit status 7 (94.756976ms)

                                                
                                                
-- stdout --
	multinode-137471
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-137471-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-137471 status --alsologtostderr: exit status 7 (88.804149ms)

                                                
                                                
-- stdout --
	multinode-137471
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-137471-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 12:05:35.144462  813436 out.go:291] Setting OutFile to fd 1 ...
	I0617 12:05:35.144660  813436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 12:05:35.144691  813436 out.go:304] Setting ErrFile to fd 2...
	I0617 12:05:35.144713  813436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 12:05:35.144964  813436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-685849/.minikube/bin
	I0617 12:05:35.145174  813436 out.go:298] Setting JSON to false
	I0617 12:05:35.145238  813436 mustload.go:65] Loading cluster: multinode-137471
	I0617 12:05:35.145315  813436 notify.go:220] Checking for updates...
	I0617 12:05:35.145678  813436 config.go:182] Loaded profile config "multinode-137471": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0617 12:05:35.145700  813436 status.go:255] checking status of multinode-137471 ...
	I0617 12:05:35.146532  813436 cli_runner.go:164] Run: docker container inspect multinode-137471 --format={{.State.Status}}
	I0617 12:05:35.165137  813436 status.go:330] multinode-137471 host status = "Stopped" (err=<nil>)
	I0617 12:05:35.165163  813436 status.go:343] host is not running, skipping remaining checks
	I0617 12:05:35.165171  813436 status.go:257] multinode-137471 status: &{Name:multinode-137471 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0617 12:05:35.165209  813436 status.go:255] checking status of multinode-137471-m02 ...
	I0617 12:05:35.165519  813436 cli_runner.go:164] Run: docker container inspect multinode-137471-m02 --format={{.State.Status}}
	I0617 12:05:35.181912  813436 status.go:330] multinode-137471-m02 host status = "Stopped" (err=<nil>)
	I0617 12:05:35.181935  813436 status.go:343] host is not running, skipping remaining checks
	I0617 12:05:35.181942  813436 status.go:257] multinode-137471-m02 status: &{Name:multinode-137471-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.99s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-137471 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0617 12:05:51.335153  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-137471 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.099407668s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-137471 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.73s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-137471
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-137471-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-137471-m02 --driver=docker  --container-runtime=containerd: exit status 14 (96.456712ms)

                                                
                                                
-- stdout --
	* [multinode-137471-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19084
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19084-685849/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-685849/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-137471-m02' is duplicated with machine name 'multinode-137471-m02' in profile 'multinode-137471'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-137471-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-137471-m03 --driver=docker  --container-runtime=containerd: (35.341638085s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-137471
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-137471: exit status 80 (297.789095ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-137471 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-137471-m03 already exists in multinode-137471-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-137471-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-137471-m03: (1.906985389s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.69s)

                                                
                                    
TestPreload (106.14s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-181813 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0617 12:07:14.379665  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-181813 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m9.088913968s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-181813 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-181813 image pull gcr.io/k8s-minikube/busybox: (1.32845115s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-181813
E0617 12:08:29.263373  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-181813: (11.995764259s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-181813 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-181813 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (20.982048984s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-181813 image list
helpers_test.go:175: Cleaning up "test-preload-181813" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-181813
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-181813: (2.452654029s)
--- PASS: TestPreload (106.14s)
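The test above is essentially: build a cluster without the preloaded image tarball, side-load one image, then confirm it survives a stop/start cycle. A minimal sketch of the same sequence, with the profile name, versions, and image taken from the log:

    # start without the preload so the image store begins empty
    out/minikube-linux-arm64 start -p test-preload-181813 --memory=2200 --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
    # pull an image into the cluster's containerd store
    out/minikube-linux-arm64 -p test-preload-181813 image pull gcr.io/k8s-minikube/busybox
    # restart and confirm the image is still listed afterwards
    out/minikube-linux-arm64 stop -p test-preload-181813
    out/minikube-linux-arm64 start -p test-preload-181813 --memory=2200 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p test-preload-181813 image list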

                                                
                                    
TestScheduledStopUnix (108.54s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-560617 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-560617 --memory=2048 --driver=docker  --container-runtime=containerd: (32.730460175s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-560617 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-560617 -n scheduled-stop-560617
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-560617 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-560617 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-560617 -n scheduled-stop-560617
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-560617
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-560617 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-560617
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-560617: exit status 7 (68.739455ms)

                                                
                                                
-- stdout --
	scheduled-stop-560617
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-560617 -n scheduled-stop-560617
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-560617 -n scheduled-stop-560617: exit status 7 (64.445284ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-560617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-560617
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-560617: (4.304689234s)
--- PASS: TestScheduledStopUnix (108.54s)
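The scheduled-stop flow walked through above maps onto three CLI calls. A minimal sketch with the profile name from the log; the 5m/15s values and the sleep are just the ones this test happened to use:

    # schedule a stop 5 minutes out
    out/minikube-linux-arm64 stop -p scheduled-stop-560617 --schedule 5m
    # change of plan: cancel the pending stop
    out/minikube-linux-arm64 stop -p scheduled-stop-560617 --cancel-scheduled
    # re-schedule a short stop and give it time to fire
    out/minikube-linux-arm64 stop -p scheduled-stop-560617 --schedule 15s
    sleep 20
    # once the stop has fired, status exits 7 and reports host/kubelet/apiserver as Stopped
    out/minikube-linux-arm64 status -p scheduled-stop-560617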

                                                
                                    
TestInsufficientStorage (10.5s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-853933 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-853933 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.053342344s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"23617a41-9166-4fa6-8c63-5acb05eace3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-853933] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ed9f42c-31b3-433b-85bd-f1c96024f6c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19084"}}
	{"specversion":"1.0","id":"0165b83f-52ea-4ace-9033-4ca4616b6ef4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ea1abe19-6df2-4081-98bd-d53dd6f96ec5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19084-685849/kubeconfig"}}
	{"specversion":"1.0","id":"97d019f2-a44e-4e8e-a431-597a0b03a957","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-685849/.minikube"}}
	{"specversion":"1.0","id":"d0ce9d15-4c8e-464f-8d1e-f4970358074c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"98d80b87-5260-4b7a-91e5-d7f91b9d829a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ea5daf40-6fae-4cb3-bc05-f46968537574","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"029035f4-55c4-478c-9e67-d76761cea1d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"715770f2-c103-4a84-b854-981e9cdae952","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"44bf61d7-2f9f-4a20-bb40-344fe76ac324","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"145c2153-3225-46e1-8d45-0c9e326819bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-853933\" primary control-plane node in \"insufficient-storage-853933\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"98d6485c-9371-4a26-9fe7-0c778f74e249","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1718296336-19068 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d77801b5-a332-4957-ae6e-18ef8f44a955","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"26a758bd-4256-4b46-b949-76190e4e4d6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-853933 --output=json --layout=cluster
E0617 12:10:51.336236  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-853933 --output=json --layout=cluster: exit status 7 (263.26358ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-853933","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-853933","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0617 12:10:51.521956  831086 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-853933" does not appear in /home/jenkins/minikube-integration/19084-685849/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-853933 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-853933 --output=json --layout=cluster: exit status 7 (282.871218ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-853933","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-853933","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0617 12:10:51.807076  831138 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-853933" does not appear in /home/jenkins/minikube-integration/19084-685849/kubeconfig
	E0617 12:10:51.816700  831138 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/insufficient-storage-853933/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-853933" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-853933
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-853933: (1.904520286s)
--- PASS: TestInsufficientStorage (10.50s)
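The exit code and JSON shape above are the parts worth remembering outside CI: a start that runs out of disk exits 26 (RSRC_DOCKER_STORAGE, with a hint that '--force' skips the check), and cluster-layout status reports StatusCode 507 (InsufficientStorage) while exiting 7. A minimal sketch, with flags from the log; the jq call is an illustrative assumption, not part of the test:

    out/minikube-linux-arm64 start -p insufficient-storage-853933 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=containerd
    echo "start exit code: $?"   # 26 when /var is out of space
    # status itself exits 7 here, but still prints the JSON payload
    out/minikube-linux-arm64 status -p insufficient-storage-853933 --output=json --layout=cluster | jq '.StatusCode'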

                                                
                                    
TestRunningBinaryUpgrade (99.06s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3120351952 start -p running-upgrade-361411 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3120351952 start -p running-upgrade-361411 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.614579925s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-361411 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-361411 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (45.870133062s)
helpers_test.go:175: Cleaning up "running-upgrade-361411" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-361411
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-361411: (3.247064045s)
--- PASS: TestRunningBinaryUpgrade (99.06s)
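The binary-upgrade scenario is: create the cluster with an older minikube release, then run `start` on the same profile with the newer binary while the cluster is still running. A minimal sketch; the old-binary path is the temporary v1.26.0 download this test used, and any older release would play that role:

    # create the profile with an old minikube release
    /tmp/minikube-v1.26.0.3120351952 start -p running-upgrade-361411 --memory=2200 --vm-driver=docker --container-runtime=containerd
    # re-run start with the new binary against the same, still-running profile
    out/minikube-linux-arm64 start -p running-upgrade-361411 --memory=2200 --driver=docker --container-runtime=containerd
    # clean up when done
    out/minikube-linux-arm64 delete -p running-upgrade-361411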

                                                
                                    
TestKubernetesUpgrade (358.56s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-147768 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-147768 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.486439194s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-147768
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-147768: (1.251371001s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-147768 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-147768 status --format={{.Host}}: exit status 7 (66.378156ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-147768 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0617 12:13:29.263879  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-147768 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m45.25219189s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-147768 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-147768 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-147768 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (109.793807ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-147768] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19084
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19084-685849/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-685849/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-147768
	    minikube start -p kubernetes-upgrade-147768 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1477682 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-147768 --kubernetes-version=v1.30.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-147768 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-147768 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.518728558s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-147768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-147768
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-147768: (2.714682754s)
--- PASS: TestKubernetesUpgrade (358.56s)
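The supported direction validated above is: start on the old Kubernetes release, stop, then start again with only --kubernetes-version bumped; a downgrade attempt is refused with exit code 106 and the delete/recreate suggestion shown in the stderr. A minimal sketch of the upgrade path, with the profile name and versions from the log:

    # create the cluster on the old Kubernetes release
    out/minikube-linux-arm64 start -p kubernetes-upgrade-147768 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
    # stop it, then restart with the newer release to upgrade in place
    out/minikube-linux-arm64 stop -p kubernetes-upgrade-147768
    out/minikube-linux-arm64 start -p kubernetes-upgrade-147768 --memory=2200 --kubernetes-version=v1.30.1 --driver=docker --container-runtime=containerd
    # confirm client and server versions after the upgrade
    kubectl --context kubernetes-upgrade-147768 version --output=json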

                                                
                                    
TestMissingContainerUpgrade (164.17s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.659877606 start -p missing-upgrade-591024 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.659877606 start -p missing-upgrade-591024 --memory=2200 --driver=docker  --container-runtime=containerd: (1m25.619180423s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-591024
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-591024: (10.576072437s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-591024
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-591024 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-591024 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.593045155s)
helpers_test.go:175: Cleaning up "missing-upgrade-591024" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-591024
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-591024: (2.224261396s)
--- PASS: TestMissingContainerUpgrade (164.17s)

                                                
                                    
TestPause/serial/Start (72.42s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-148086 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-148086 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m12.421157314s)
--- PASS: TestPause/serial/Start (72.42s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-408411 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-408411 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (89.913202ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-408411] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19084
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19084-685849/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-685849/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (36.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-408411 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-408411 --driver=docker  --container-runtime=containerd: (36.08820224s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-408411 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.62s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-408411 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-408411 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.547548442s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-408411 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-408411 status -o json: exit status 2 (294.049065ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-408411","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-408411
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-408411: (1.950243585s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.79s)

                                                
                                    
TestNoKubernetes/serial/Start (5.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-408411 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-408411 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.763010305s)
--- PASS: TestNoKubernetes/serial/Start (5.76s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-408411 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-408411 "sudo systemctl is-active --quiet service kubelet": exit status 1 (275.314153ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
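The check in this test is simply "is kubelet inactive inside the node". A minimal sketch of the same probe, with the profile name and commands from the log (the if/else handling is illustrative):

    # start a node without any Kubernetes components
    out/minikube-linux-arm64 start -p NoKubernetes-408411 --no-kubernetes --driver=docker --container-runtime=containerd
    # systemctl is-active exits non-zero when kubelet is not running, which is the expected state here
    if out/minikube-linux-arm64 ssh -p NoKubernetes-408411 "sudo systemctl is-active --quiet service kubelet"; then
      echo "unexpected: kubelet is active"
    else
      echo "kubelet is not running, as expected"
    fi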

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.13s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-408411
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-408411: (1.233829635s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-408411 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-408411 --driver=docker  --container-runtime=containerd: (6.859301349s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.86s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-408411 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-408411 "sudo systemctl is-active --quiet service kubelet": exit status 1 (269.364966ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.81s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-148086 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-148086 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.795313804s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.81s)

                                                
                                    
TestPause/serial/Pause (0.94s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-148086 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.94s)

                                                
                                    
TestPause/serial/VerifyStatus (0.48s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-148086 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-148086 --output=json --layout=cluster: exit status 2 (481.668371ms)

                                                
                                                
-- stdout --
	{"Name":"pause-148086","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-148086","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.48s)
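
The status JSON above reports StatusCode 418 / "Paused" for the profile, and the command itself exits with status 2 while the cluster is paused, which is why the non-zero exit is tolerated. A sketch of reading the same state from a script, using the StatusName field shown in the output (the jq pipeline is an assumption for illustration, not something the test runs):

  out/minikube-linux-arm64 status -p pause-148086 --output=json --layout=cluster | jq -r '.StatusName'   # prints "Paused"; minikube itself exits 2 in this state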

                                                
                                    
TestPause/serial/Unpause (0.86s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-148086 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.86s)

                                                
                                    
TestPause/serial/PauseAgain (1.12s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-148086 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-148086 --alsologtostderr -v=5: (1.1190067s)
--- PASS: TestPause/serial/PauseAgain (1.12s)

                                                
                                    
TestPause/serial/DeletePaused (3.7s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-148086 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-148086 --alsologtostderr -v=5: (3.699033489s)
--- PASS: TestPause/serial/DeletePaused (3.70s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.15s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-148086
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-148086: exit status 1 (15.196713ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-148086: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.15s)
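
The deletion check relies on docker itself: once the profile has been deleted, "docker volume inspect pause-148086" fails with exit status 1 and "no such volume", and the container and network listings no longer mention the profile. A hand-run equivalent (the name filters are an assumption added for readability):

  docker ps -a --filter name=pause-148086
  docker volume inspect pause-148086 || echo "volume removed"
  docker network ls --filter name=pause-148086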

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.45s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.45s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (118.68s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1818772800 start -p stopped-upgrade-451724 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1818772800 start -p stopped-upgrade-451724 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (43.906303243s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1818772800 -p stopped-upgrade-451724 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1818772800 -p stopped-upgrade-451724 stop: (19.988738737s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-451724 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0617 12:15:51.335410  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
E0617 12:16:32.309576  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-451724 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (54.787591039s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (118.68s)
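
The upgrade path exercised above has three steps: create the profile with a released binary (v1.26.0 here), stop it, then start the same profile with the binary under test. Condensed from the run (the temporary binary path is specific to this job):

  /tmp/minikube-v1.26.0.1818772800 start -p stopped-upgrade-451724 --memory=2200 --vm-driver=docker --container-runtime=containerd
  /tmp/minikube-v1.26.0.1818772800 -p stopped-upgrade-451724 stop
  out/minikube-linux-arm64 start -p stopped-upgrade-451724 --memory=2200 --driver=docker --container-runtime=containerd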

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.28s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-451724
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-451724: (1.281331047s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.28s)

                                                
                                    
TestNetworkPlugins/group/false (5.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-064909 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-064909 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (237.474529ms)

                                                
                                                
-- stdout --
	* [false-064909] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19084
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19084-685849/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-685849/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0617 12:18:27.010642  868293 out.go:291] Setting OutFile to fd 1 ...
	I0617 12:18:27.010866  868293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 12:18:27.010892  868293 out.go:304] Setting ErrFile to fd 2...
	I0617 12:18:27.010912  868293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0617 12:18:27.011198  868293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19084-685849/.minikube/bin
	I0617 12:18:27.011748  868293 out.go:298] Setting JSON to false
	I0617 12:18:27.012798  868293 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14454,"bootTime":1718612253,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0617 12:18:27.012907  868293 start.go:139] virtualization:  
	I0617 12:18:27.016281  868293 out.go:177] * [false-064909] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0617 12:18:27.018751  868293 out.go:177]   - MINIKUBE_LOCATION=19084
	I0617 12:18:27.020416  868293 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0617 12:18:27.018980  868293 notify.go:220] Checking for updates...
	I0617 12:18:27.024200  868293 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19084-685849/kubeconfig
	I0617 12:18:27.026083  868293 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19084-685849/.minikube
	I0617 12:18:27.027778  868293 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0617 12:18:27.029295  868293 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0617 12:18:27.031561  868293 config.go:182] Loaded profile config "force-systemd-flag-681619": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.1
	I0617 12:18:27.031697  868293 driver.go:392] Setting default libvirt URI to qemu:///system
	I0617 12:18:27.065224  868293 docker.go:122] docker version: linux-26.1.4:Docker Engine - Community
	I0617 12:18:27.065360  868293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0617 12:18:27.163796  868293 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:44 SystemTime:2024-06-17 12:18:27.15361754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0617 12:18:27.163905  868293 docker.go:295] overlay module found
	I0617 12:18:27.165959  868293 out.go:177] * Using the docker driver based on user configuration
	I0617 12:18:27.167798  868293 start.go:297] selected driver: docker
	I0617 12:18:27.167816  868293 start.go:901] validating driver "docker" against <nil>
	I0617 12:18:27.167885  868293 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0617 12:18:27.171097  868293 out.go:177] 
	W0617 12:18:27.173402  868293 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0617 12:18:27.180376  868293 out.go:177] 

                                                
                                                
** /stderr **
E0617 12:18:29.263996  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
net_test.go:88: 
----------------------- debugLogs start: false-064909 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-064909

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-064909

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-064909

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-064909

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-064909

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-064909

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-064909

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-064909

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-064909

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-064909

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-064909

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-064909" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-064909" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-064909

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-064909"

                                                
                                                
----------------------- debugLogs end: false-064909 [took: 4.849248647s] --------------------------------
helpers_test.go:175: Cleaning up "false-064909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-064909
--- PASS: TestNetworkPlugins/group/false (5.32s)
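
The exit status 14 above is the point of this group: with --container-runtime=containerd, minikube rejects --cni=false at validation time (MK_USAGE: the containerd runtime requires a CNI), so no cluster is ever created. For comparison, a start that satisfies the check would name a concrete CNI; the bridge value below is an assumed example, not taken from this run:

  out/minikube-linux-arm64 start -p false-064909 --memory=2048 --cni=bridge --driver=docker --container-runtime=containerd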

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (151.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-440919 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0617 12:20:51.335205  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-440919 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m31.832670942s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (151.83s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-440919 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1a1b5b8b-b027-4c7d-b850-92f08d84646e] Pending
helpers_test.go:344: "busybox" [1a1b5b8b-b027-4c7d-b850-92f08d84646e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1a1b5b8b-b027-4c7d-b850-92f08d84646e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00368769s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-440919 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.56s)
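
The DeployApp step is the same for every StartStop group: create the busybox pod from testdata/busybox.yaml, wait for it to become Ready, then exec into it to read the file-descriptor limit. A hand-run equivalent (the kubectl wait invocation is an assumption; the test polls with its own helpers):

  kubectl --context old-k8s-version-440919 create -f testdata/busybox.yaml
  kubectl --context old-k8s-version-440919 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m0s
  kubectl --context old-k8s-version-440919 exec busybox -- /bin/sh -c "ulimit -n"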

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-440919 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-440919 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-440919 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-440919 --alsologtostderr -v=3: (12.648322961s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.65s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (74.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-969284 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-969284 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.1: (1m14.887188519s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.89s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-440919 -n old-k8s-version-440919
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-440919 -n old-k8s-version-440919: exit status 7 (220.883534ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-440919 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.47s)
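
"exit status 7 (may be ok)" is the expected outcome here: minikube status returns a non-zero code (7 in this run) while the host is stopped, yet addons can still be enabled against the stopped profile. A script doing the same check would tolerate that specific failure (the || true is illustrative):

  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-440919 -n old-k8s-version-440919 || true   # prints Stopped, exits 7
  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-440919 --images=MetricsScraper=registry.k8s.io/echoserver:1.4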

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-969284 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7144a55a-e010-4066-829e-9112b32667c3] Pending
helpers_test.go:344: "busybox" [7144a55a-e010-4066-829e-9112b32667c3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7144a55a-e010-4066-829e-9112b32667c3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003474707s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-969284 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.36s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-969284 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-969284 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.052002732s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-969284 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-969284 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-969284 --alsologtostderr -v=3: (12.077989979s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-969284 -n no-preload-969284
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-969284 -n no-preload-969284: exit status 7 (80.997872ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-969284 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (289.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-969284 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.1
E0617 12:25:51.335294  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
E0617 12:28:29.263724  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-969284 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.1: (4m49.234489503s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-969284 -n no-preload-969284
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (289.58s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-jq8f7" [1ab627c3-25ec-4a16-8307-60918ec60035] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00441614s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-jq8f7" [1ab627c3-25ec-4a16-8307-60918ec60035] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004766434s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-440919 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-zvnwc" [7c017343-9856-4aff-b35b-c1ece731071b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003604264s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-440919 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-440919 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-440919 -n old-k8s-version-440919
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-440919 -n old-k8s-version-440919: exit status 2 (305.725734ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-440919 -n old-k8s-version-440919
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-440919 -n old-k8s-version-440919: exit status 2 (356.792632ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-440919 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-440919 -n old-k8s-version-440919
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-440919 -n old-k8s-version-440919
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.87s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-zvnwc" [7c017343-9856-4aff-b35b-c1ece731071b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005187166s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-969284 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (65.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-589753 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-589753 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.1: (1m5.755846616s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (65.76s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-969284 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (4.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-969284 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-969284 -n no-preload-969284
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-969284 -n no-preload-969284: exit status 2 (318.901046ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-969284 -n no-preload-969284
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-969284 -n no-preload-969284: exit status 2 (311.863279ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-969284 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-969284 --alsologtostderr -v=1: (1.229145303s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-969284 -n no-preload-969284
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-969284 -n no-preload-969284
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-499011 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-499011 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.1: (1m3.446323216s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.45s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-589753 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [50eae8ff-f8a8-4417-92ac-5e331523d5d8] Pending
helpers_test.go:344: "busybox" [50eae8ff-f8a8-4417-92ac-5e331523d5d8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [50eae8ff-f8a8-4417-92ac-5e331523d5d8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004213282s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-589753 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.38s)
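The DeployApp step above creates the busybox pod from testdata/busybox.yaml, waits for it to reach Running, and then execs "ulimit -n" in it to confirm the container got a usable open-file limit. A hedged Go sketch of that flow follows; it assumes kubectl on PATH, reuses the embed-certs-589753 context name from the log, and substitutes kubectl wait with the integration-test=busybox label for the harness's own polling helper.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a kubectl command against the given context and returns its
// combined output, so each step of the DeployApp flow reads as one call.
func run(context string, args ...string) (string, error) {
	full := append([]string{"--context", context}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	ctx := "embed-certs-589753" // kubectl context name taken from the log

	// 1. Create the busybox pod (same manifest the test uses).
	if out, err := run(ctx, "create", "-f", "testdata/busybox.yaml"); err != nil {
		fmt.Println("create failed:", err, out)
		return
	}

	// 2. Wait for it to become Ready; the harness polls the same label itself.
	if out, err := run(ctx, "wait", "--for=condition=ready",
		"pod", "-l", "integration-test=busybox", "--timeout=8m"); err != nil {
		fmt.Println("wait failed:", err, out)
		return
	}

	// 3. Check the container's open-file limit, as the test does.
	out, err := run(ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	if err != nil {
		fmt.Println("exec failed:", err, out)
		return
	}
	fmt.Println("ulimit -n inside busybox:", out)
}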

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-589753 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-589753 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.191885581s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-589753 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-499011 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7a8643e0-e33c-4258-b2ff-04b5a6b558a2] Pending
helpers_test.go:344: "busybox" [7a8643e0-e33c-4258-b2ff-04b5a6b558a2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7a8643e0-e33c-4258-b2ff-04b5a6b558a2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003106508s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-499011 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-589753 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-589753 --alsologtostderr -v=3: (12.125868231s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-499011 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-499011 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-499011 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-499011 --alsologtostderr -v=3: (12.072939554s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-589753 -n embed-certs-589753
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-589753 -n embed-certs-589753: exit status 7 (71.232406ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-589753 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (278.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-589753 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.1
E0617 12:30:51.335222  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-589753 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.1: (4m37.772803475s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-589753 -n embed-certs-589753
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (278.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-499011 -n default-k8s-diff-port-499011
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-499011 -n default-k8s-diff-port-499011: exit status 7 (154.051183ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-499011 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (269.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-499011 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.1
E0617 12:32:28.662709  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/client.crt: no such file or directory
E0617 12:32:28.668118  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/client.crt: no such file or directory
E0617 12:32:28.678417  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/client.crt: no such file or directory
E0617 12:32:28.698668  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/client.crt: no such file or directory
E0617 12:32:28.738936  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/client.crt: no such file or directory
E0617 12:32:28.819319  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/client.crt: no such file or directory
E0617 12:32:28.979693  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/client.crt: no such file or directory
E0617 12:32:29.300106  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/client.crt: no such file or directory
E0617 12:32:29.941029  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/client.crt: no such file or directory
E0617 12:32:31.221722  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/client.crt: no such file or directory
E0617 12:32:33.781877  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/client.crt: no such file or directory
E0617 12:32:38.902315  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/client.crt: no such file or directory
E0617 12:32:49.142507  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/client.crt: no such file or directory
E0617 12:33:09.622791  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/client.crt: no such file or directory
E0617 12:33:12.310210  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
E0617 12:33:29.263894  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
E0617 12:33:50.583619  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/client.crt: no such file or directory
E0617 12:34:04.191823  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/client.crt: no such file or directory
E0617 12:34:04.197268  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/client.crt: no such file or directory
E0617 12:34:04.207677  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/client.crt: no such file or directory
E0617 12:34:04.228038  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/client.crt: no such file or directory
E0617 12:34:04.268365  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/client.crt: no such file or directory
E0617 12:34:04.348691  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/client.crt: no such file or directory
E0617 12:34:04.509105  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/client.crt: no such file or directory
E0617 12:34:04.829526  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/client.crt: no such file or directory
E0617 12:34:05.469746  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/client.crt: no such file or directory
E0617 12:34:06.749955  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/client.crt: no such file or directory
E0617 12:34:09.310424  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/client.crt: no such file or directory
E0617 12:34:14.431106  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/client.crt: no such file or directory
E0617 12:34:24.672000  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/client.crt: no such file or directory
E0617 12:34:45.152288  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/client.crt: no such file or directory
E0617 12:35:12.504552  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/client.crt: no such file or directory
E0617 12:35:26.113418  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-499011 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.1: (4m28.865070677s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-499011 -n default-k8s-diff-port-499011
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (269.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-n2c2r" [ee8c4be0-998f-4192-9b29-739533cdc482] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003782074s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-52cs5" [3ea6c39d-3c08-449a-b445-32b6c77caf5a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00351804s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-n2c2r" [ee8c4be0-998f-4192-9b29-739533cdc482] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003502487s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-589753 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-52cs5" [3ea6c39d-3c08-449a-b445-32b6c77caf5a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004566917s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-499011 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-589753 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)
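VerifyKubernetesImages lists every image loaded in the profile and reports the ones that are not part of a stock minikube deployment (the kindnetd and busybox images above). The log uses image list --format=json; since that JSON shape is not shown here, the sketch below uses the plain output instead, assuming minikube is on PATH and that the default format prints one image reference per line. The allowlist is deliberately simplified; the real test's allowlist lives in the minikube repo.

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "embed-certs-589753" // profile name taken from the log

	// Assumes the default `image list` output is one image reference per line.
	out, err := exec.Command("minikube", "-p", profile, "image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}

	// Simplified allowlist: anything outside it is reported, roughly
	// mirroring the "Found non-minikube image" lines above.
	allowed := []string{"registry.k8s.io/", "gcr.io/k8s-minikube/storage-provisioner"}

	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		img := strings.TrimSpace(sc.Text())
		if img == "" {
			continue
		}
		ok := false
		for _, prefix := range allowed {
			if strings.HasPrefix(img, prefix) {
				ok = true
				break
			}
		}
		if !ok {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}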

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-589753 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-589753 -n embed-certs-589753
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-589753 -n embed-certs-589753: exit status 2 (400.857291ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-589753 -n embed-certs-589753
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-589753 -n embed-certs-589753: exit status 2 (370.136243ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-589753 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-589753 --alsologtostderr -v=1: (1.005989756s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-589753 -n embed-certs-589753
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-589753 -n embed-certs-589753
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.81s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-499011 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-499011 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-499011 --alsologtostderr -v=1: (1.047975785s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-499011 -n default-k8s-diff-port-499011
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-499011 -n default-k8s-diff-port-499011: exit status 2 (382.29122ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-499011 -n default-k8s-diff-port-499011
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-499011 -n default-k8s-diff-port-499011: exit status 2 (496.916068ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-499011 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-499011 --alsologtostderr -v=1: (1.296201074s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-499011 -n default-k8s-diff-port-499011
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-499011 -n default-k8s-diff-port-499011
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.50s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (55.65s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-988702 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-988702 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.1: (55.654343519s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (55.65s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (73.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-064909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0617 12:35:51.335729  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-064909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m13.399937988s)
--- PASS: TestNetworkPlugins/group/auto/Start (73.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-988702 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-988702 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.229476661s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-988702 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-988702 --alsologtostderr -v=3: (1.357347107s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.36s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-988702 -n newest-cni-988702
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-988702 -n newest-cni-988702: exit status 7 (120.148511ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-988702 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-988702 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.1
E0617 12:36:48.038071  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-988702 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.1: (16.670284668s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-988702 -n newest-cni-988702
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.15s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-064909 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-064909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-24g7l" [6239d596-b9bb-426b-92ed-ff8bf463a719] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-24g7l" [6239d596-b9bb-426b-92ed-ff8bf463a719] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.00350345s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.45s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-988702 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-988702 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-988702 --alsologtostderr -v=1: (1.033920341s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-988702 -n newest-cni-988702
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-988702 -n newest-cni-988702: exit status 2 (377.845171ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-988702 -n newest-cni-988702
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-988702 -n newest-cni-988702: exit status 2 (402.712764ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-988702 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-988702 -n newest-cni-988702
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-988702 -n newest-cni-988702
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.46s)
E0617 12:42:13.372320  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/auto-064909/client.crt: no such file or directory
E0617 12:42:23.613349  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/auto-064909/client.crt: no such file or directory
E0617 12:42:28.663071  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/client.crt: no such file or directory
E0617 12:42:44.094132  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/auto-064909/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (62.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-064909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-064909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m2.884956707s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (62.89s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-064909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-064909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-064909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
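The DNS, Localhost and HairPin subtests above all drive the netcat deployment created from testdata/netcat-deployment.yaml: an in-cluster "nslookup kubernetes.default" proves service DNS works, "nc -z localhost 8080" proves the pod can reach its own port locally, and "nc -z netcat 8080" proves hairpin traffic (the pod reaching itself through its own service name) is routed. A hedged Go sketch of those three probes follows; it assumes kubectl on PATH and reuses the auto-064909 context name from the log.

package main

import (
	"fmt"
	"os/exec"
)

// probe runs a command inside the netcat deployment via kubectl exec and
// reports whether it succeeded; the three calls in main mirror the DNS,
// Localhost and HairPin checks in the report above.
func probe(context, name string, args ...string) {
	base := []string{"--context", context, "exec", "deployment/netcat", "--"}
	cmd := exec.Command("kubectl", append(base, args...)...)
	if err := cmd.Run(); err != nil {
		fmt.Printf("%-9s FAIL: %v\n", name, err)
		return
	}
	fmt.Printf("%-9s ok\n", name)
}

func main() {
	ctx := "auto-064909" // kubectl context name taken from the log

	probe(ctx, "dns", "nslookup", "kubernetes.default")
	probe(ctx, "localhost", "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080")
	probe(ctx, "hairpin", "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
}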

                                                
                                    
TestNetworkPlugins/group/calico/Start (67.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-064909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0617 12:37:56.345404  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/old-k8s-version-440919/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-064909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m7.805321786s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.81s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-66bwn" [4ad84f4f-775d-4a0e-8e43-fff44557e959] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005501613s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-064909 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-064909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-4mlqc" [8ef4d892-d24d-4726-b7eb-38ec036dd9c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-4mlqc" [8ef4d892-d24d-4726-b7eb-38ec036dd9c2] Running
E0617 12:38:29.264154  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/addons-134601/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003913493s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-064909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-064909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-064909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-kmjgs" [f40d3aa6-7f8c-42e1-9065-bd016fdaf13c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005950388s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (63.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-064909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-064909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m3.595360636s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.60s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-064909 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-064909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-np5cz" [8fa71378-08b1-4b13-b671-5cee766847d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-np5cz" [8fa71378-08b1-4b13-b671-5cee766847d2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003978147s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.39s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-064909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-064909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-064909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (44.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-064909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0617 12:39:31.878550  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/no-preload-969284/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-064909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (44.750402568s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (44.75s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-064909 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-064909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-nhhq8" [b50e5609-b417-4101-9fe8-504274c43009] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-nhhq8" [b50e5609-b417-4101-9fe8-504274c43009] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004633561s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-064909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-064909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-064909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-064909 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-064909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-zs8nr" [9b3a2b88-7620-4ebb-a73b-b06a9013cfe6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-zs8nr" [9b3a2b88-7620-4ebb-a73b-b06a9013cfe6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003882901s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.48s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (26.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-064909 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-064909 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.306509213s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-064909 exec deployment/netcat -- nslookup kubernetes.default
E0617 12:40:42.986498  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/default-k8s-diff-port-499011/client.crt: no such file or directory
E0617 12:40:48.106996  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/default-k8s-diff-port-499011/client.crt: no such file or directory
E0617 12:40:51.335800  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
net_test.go:175: (dbg) Done: kubectl --context enable-default-cni-064909 exec deployment/netcat -- nslookup kubernetes.default: (10.261859819s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (26.87s)
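
Note that the first nslookup attempt above timed out after ~15s and only the retry succeeded, so this test passed on the second try. A sketch of how the flake could be chased by hand (the k8s-app=kube-dns label is the conventional CoreDNS label and is an assumption here, not taken from this log):

    # Re-run the in-cluster lookup that timed out on the first attempt.
    kubectl --context enable-default-cni-064909 exec deployment/netcat -- nslookup kubernetes.default
    # Check that CoreDNS pods are Running and scan their recent logs for timeouts or errors.
    kubectl --context enable-default-cni-064909 -n kube-system get pods -l k8s-app=kube-dns
    kubectl --context enable-default-cni-064909 -n kube-system logs -l k8s-app=kube-dns --tail=50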

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (61.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-064909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0617 12:40:34.381448  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/functional-479738/client.crt: no such file or directory
E0617 12:40:37.864678  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/default-k8s-diff-port-499011/client.crt: no such file or directory
E0617 12:40:37.870135  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/default-k8s-diff-port-499011/client.crt: no such file or directory
E0617 12:40:37.880405  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/default-k8s-diff-port-499011/client.crt: no such file or directory
E0617 12:40:37.900671  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/default-k8s-diff-port-499011/client.crt: no such file or directory
E0617 12:40:37.941033  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/default-k8s-diff-port-499011/client.crt: no such file or directory
E0617 12:40:38.021890  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/default-k8s-diff-port-499011/client.crt: no such file or directory
E0617 12:40:38.182667  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/default-k8s-diff-port-499011/client.crt: no such file or directory
E0617 12:40:38.503792  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/default-k8s-diff-port-499011/client.crt: no such file or directory
E0617 12:40:39.144914  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/default-k8s-diff-port-499011/client.crt: no such file or directory
E0617 12:40:40.425334  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/default-k8s-diff-port-499011/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-064909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m1.758144421s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.76s)
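
The Start step is just the minikube invocation recorded above; re-running it outside the harness needs the same binary and flags, plus an optional node check afterwards (the binary path assumes the test workspace layout):

    # Same flags as the test run.
    out/minikube-linux-arm64 start -p flannel-064909 --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m --cni=flannel --driver=docker --container-runtime=containerd
    # Confirm the node came up Ready.
    kubectl --context flannel-064909 get nodes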

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.30s)


                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-064909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-064909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (91.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-064909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0617 12:41:18.828072  691242 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19084-685849/.minikube/profiles/default-k8s-diff-port-499011/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-064909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m31.118681839s)
--- PASS: TestNetworkPlugins/group/bridge/Start (91.12s)
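
For the bridge CNI run, one quick sanity check after start is to list the generated CNI configuration inside the node; this is a sketch only, since the exact file name under /etc/cni/net.d is not shown in this log:

    # List the CNI config files minikube wrote for the bridge network.
    out/minikube-linux-arm64 ssh -p bridge-064909 "sudo ls -l /etc/cni/net.d"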

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-z6vwh" [716a17db-7594-488c-bf11-3699ca58b621] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004909418s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
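
The ControllerPod step waits for the flannel DaemonSet pod; the same check can be made directly with the namespace and label the test uses above:

    # The test waits on app=flannel in the kube-flannel namespace; this lists the same pods.
    kubectl --context flannel-064909 -n kube-flannel get pods -l app=flannel -o wide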

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-064909 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)
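
KubeletFlags only verifies that kubelet is running and captures its full command line (pgrep -a prints the PID plus arguments), which the test then inspects for the expected runtime flags:

    # Print the running kubelet process with its complete argument list.
    out/minikube-linux-arm64 ssh -p flannel-064909 "pgrep -a kubelet"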

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-064909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-6pm8k" [18b5014d-d50f-4164-999b-5ab6d4f18e68] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-6pm8k" [18b5014d-d50f-4164-999b-5ab6d4f18e68] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004269172s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-064909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-064909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-064909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-064909 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-064909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-fk8z5" [d8781567-3fce-4335-bbe5-ad4c3648c4de] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-fk8z5" [d8781567-3fce-4335-bbe5-ad4c3648c4de] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004189446s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-064909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-064909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)
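
Localhost and HairPin differ only in the target: Localhost dials 127.0.0.1 inside the pod's own network namespace, while HairPin dials the pod's Service name so the connection is NATed back to the same pod. Reproduced by hand, assuming the same netcat deployment and service:

    # Localhost path: stays inside the pod's network namespace.
    kubectl --context bridge-064909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z localhost 8080"
    # Hairpin path: goes out via the "netcat" service and back to the same pod.
    kubectl --context bridge-064909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080"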

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-064909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (28/328)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.54s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-079460 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-079460" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-079460
--- SKIP: TestDownloadOnlyKic (0.54s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-918593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-918593
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-064909 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-064909

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-064909

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-064909

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-064909

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-064909

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-064909

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-064909

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-064909

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-064909

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-064909

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-064909

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-064909" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-064909" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-064909

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-064909"

                                                
                                                
----------------------- debugLogs end: kubenet-064909 [took: 4.595989542s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-064909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-064909
--- SKIP: TestNetworkPlugins/group/kubenet (4.82s)
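
The debugLogs block above reflects a profile that was never started (the test skips before creating a cluster), which is why every probe reports a missing context or profile. The cleanup the helper performs is equivalent to running:

    # List known profiles, then delete the kubenet placeholder profile.
    out/minikube-linux-arm64 profile list
    out/minikube-linux-arm64 delete -p kubenet-064909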

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-064909 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-064909

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-064909

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-064909

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-064909

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-064909

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-064909

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-064909

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-064909

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-064909

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-064909

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-064909

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-064909" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-064909

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-064909

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-064909

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-064909

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-064909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-064909" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
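The kubectl config above shows why none of the k8s probes can succeed: clusters, contexts, and users are all null and current-context is empty, so there is no "cilium-064909" context left to select. A quick sanity check against this state would look something like the following (hypothetical commands, not part of the harness):

	# lists nothing, because the kubeconfig has no contexts
	kubectl config get-contexts
	# fails with one of the two context errors seen throughout this dump
	kubectl --context cilium-064909 get cm -A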

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-064909

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-064909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-064909"

                                                
                                                
----------------------- debugLogs end: cilium-064909 [took: 4.792455194s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-064909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-064909
--- SKIP: TestNetworkPlugins/group/cilium (5.00s)
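With the profile deleted during cleanup, the command suggested by the "Profile not found" hints above can confirm that nothing is left behind (hypothetical follow-up, not part of the test run):

	out/minikube-linux-arm64 profile list
	# cilium-064909 should no longer appear in the output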

                                                
                                    