Test Report: Docker_Linux_containerd_arm64 19106

0b6579e93b2a9bd368d98c5e9e3374097121bbca:2024-06-20:34974

Test fail (7/328)

TestAddons/parallel/Ingress (35.82s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-527088 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-527088 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-527088 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [22f7b82a-15f3-4f70-b3bf-189cadb78fd2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [22f7b82a-15f3-4f70-b3bf-189cadb78fd2] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003903967s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-527088 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-527088 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-527088 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:299: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.081019736s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:301: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:305: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-527088 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-527088 addons disable ingress-dns --alsologtostderr -v=1: (1.005414329s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-527088 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-527088 addons disable ingress --alsologtostderr -v=1: (7.784444927s)
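A minimal sketch of how the failing ingress-dns lookup above could be rechecked by hand, using only commands that already appear in this log (the profile name addons-527088 and the node IP 192.168.49.2 are specific to this run and are assumptions for any other environment):

	# ask the ingress-dns server on the minikube node to resolve the example host
	MINIKUBE_IP=$(out/minikube-linux-arm64 -p addons-527088 ip)   # returned 192.168.49.2 in this run
	nslookup hello-john.test "$MINIKUBE_IP"
	# expected: an answer for hello-john.test
	# observed here: ";; connection timed out; no servers could be reached" after ~15s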
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-527088
helpers_test.go:235: (dbg) docker inspect addons-527088:

-- stdout --
	[
	    {
	        "Id": "164a3d8e2d6057c6729819437cd7ccd1d6f15eaa11fa3d711ebf6bc54171d36a",
	        "Created": "2024-06-20T17:55:56.905084966Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 280775,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-06-20T17:55:57.226649284Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d01e921d87b5c98766e198911bba95096a87baa7b20caabee6d66ddda3a30e16",
	        "ResolvConfPath": "/var/lib/docker/containers/164a3d8e2d6057c6729819437cd7ccd1d6f15eaa11fa3d711ebf6bc54171d36a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/164a3d8e2d6057c6729819437cd7ccd1d6f15eaa11fa3d711ebf6bc54171d36a/hostname",
	        "HostsPath": "/var/lib/docker/containers/164a3d8e2d6057c6729819437cd7ccd1d6f15eaa11fa3d711ebf6bc54171d36a/hosts",
	        "LogPath": "/var/lib/docker/containers/164a3d8e2d6057c6729819437cd7ccd1d6f15eaa11fa3d711ebf6bc54171d36a/164a3d8e2d6057c6729819437cd7ccd1d6f15eaa11fa3d711ebf6bc54171d36a-json.log",
	        "Name": "/addons-527088",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-527088:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-527088",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bebbe0f9276418282a1aca3fe171b673317d67d1a42196c401165881425d9f9a-init/diff:/var/lib/docker/overlay2/2993e2c9fcbb886b1475733978fee74bf42199db877e5d5079a8d8df185eaf52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bebbe0f9276418282a1aca3fe171b673317d67d1a42196c401165881425d9f9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bebbe0f9276418282a1aca3fe171b673317d67d1a42196c401165881425d9f9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bebbe0f9276418282a1aca3fe171b673317d67d1a42196c401165881425d9f9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-527088",
	                "Source": "/var/lib/docker/volumes/addons-527088/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-527088",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-527088",
	                "name.minikube.sigs.k8s.io": "addons-527088",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c8d1ef6fbf9ea8353cb25bada8d942d3aaf02a8e966dc342660e5682d4b064ab",
	            "SandboxKey": "/var/run/docker/netns/c8d1ef6fbf9e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-527088": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "2e3bca44a1e4cbacc033efab61f8060de4088caec5bc6ffa1e646079e6c816be",
	                    "EndpointID": "85332bd248a94854f3c002117576a70eb8b061ec5650b85f60971885f7c29ab1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-527088",
	                        "164a3d8e2d60"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-527088 -n addons-527088
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-527088 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-527088 logs -n 25: (1.528519223s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-678265                                                                     | download-only-678265   | jenkins | v1.33.1 | 20 Jun 24 17:55 UTC | 20 Jun 24 17:55 UTC |
	| delete  | -p download-only-636496                                                                     | download-only-636496   | jenkins | v1.33.1 | 20 Jun 24 17:55 UTC | 20 Jun 24 17:55 UTC |
	| start   | --download-only -p                                                                          | download-docker-230526 | jenkins | v1.33.1 | 20 Jun 24 17:55 UTC |                     |
	|         | download-docker-230526                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p download-docker-230526                                                                   | download-docker-230526 | jenkins | v1.33.1 | 20 Jun 24 17:55 UTC | 20 Jun 24 17:55 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-252165   | jenkins | v1.33.1 | 20 Jun 24 17:55 UTC |                     |
	|         | binary-mirror-252165                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34775                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-252165                                                                     | binary-mirror-252165   | jenkins | v1.33.1 | 20 Jun 24 17:55 UTC | 20 Jun 24 17:55 UTC |
	| addons  | disable dashboard -p                                                                        | addons-527088          | jenkins | v1.33.1 | 20 Jun 24 17:55 UTC |                     |
	|         | addons-527088                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-527088          | jenkins | v1.33.1 | 20 Jun 24 17:55 UTC |                     |
	|         | addons-527088                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-527088 --wait=true                                                                | addons-527088          | jenkins | v1.33.1 | 20 Jun 24 17:55 UTC | 20 Jun 24 17:59 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-527088          | jenkins | v1.33.1 | 20 Jun 24 17:59 UTC | 20 Jun 24 17:59 UTC |
	|         | -p addons-527088                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-527088 ip                                                                            | addons-527088          | jenkins | v1.33.1 | 20 Jun 24 17:59 UTC | 20 Jun 24 17:59 UTC |
	| addons  | addons-527088 addons disable                                                                | addons-527088          | jenkins | v1.33.1 | 20 Jun 24 17:59 UTC | 20 Jun 24 17:59 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-527088          | jenkins | v1.33.1 | 20 Jun 24 17:59 UTC | 20 Jun 24 17:59 UTC |
	|         | -p addons-527088                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-527088 ssh cat                                                                       | addons-527088          | jenkins | v1.33.1 | 20 Jun 24 17:59 UTC | 20 Jun 24 17:59 UTC |
	|         | /opt/local-path-provisioner/pvc-6d99b03d-25b3-47de-a9a4-a709a8b14304_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-527088 addons disable                                                                | addons-527088          | jenkins | v1.33.1 | 20 Jun 24 17:59 UTC | 20 Jun 24 18:00 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-527088          | jenkins | v1.33.1 | 20 Jun 24 18:00 UTC | 20 Jun 24 18:00 UTC |
	|         | addons-527088                                                                               |                        |         |         |                     |                     |
	| addons  | addons-527088 addons                                                                        | addons-527088          | jenkins | v1.33.1 | 20 Jun 24 18:01 UTC | 20 Jun 24 18:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-527088 addons                                                                        | addons-527088          | jenkins | v1.33.1 | 20 Jun 24 18:01 UTC | 20 Jun 24 18:01 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-527088          | jenkins | v1.33.1 | 20 Jun 24 18:01 UTC | 20 Jun 24 18:01 UTC |
	|         | addons-527088                                                                               |                        |         |         |                     |                     |
	| addons  | addons-527088 addons                                                                        | addons-527088          | jenkins | v1.33.1 | 20 Jun 24 18:01 UTC | 20 Jun 24 18:01 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-527088 ssh curl -s                                                                   | addons-527088          | jenkins | v1.33.1 | 20 Jun 24 18:01 UTC | 20 Jun 24 18:01 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-527088 addons disable                                                                | addons-527088          | jenkins | v1.33.1 | 20 Jun 24 18:01 UTC | 20 Jun 24 18:02 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| ip      | addons-527088 ip                                                                            | addons-527088          | jenkins | v1.33.1 | 20 Jun 24 18:01 UTC | 20 Jun 24 18:01 UTC |
	| addons  | addons-527088 addons disable                                                                | addons-527088          | jenkins | v1.33.1 | 20 Jun 24 18:02 UTC | 20 Jun 24 18:02 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-527088 addons disable                                                                | addons-527088          | jenkins | v1.33.1 | 20 Jun 24 18:02 UTC | 20 Jun 24 18:02 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/20 17:55:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0620 17:55:33.676175  280309 out.go:291] Setting OutFile to fd 1 ...
	I0620 17:55:33.676328  280309 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 17:55:33.676339  280309 out.go:304] Setting ErrFile to fd 2...
	I0620 17:55:33.676345  280309 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 17:55:33.676591  280309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19106-274269/.minikube/bin
	I0620 17:55:33.677035  280309 out.go:298] Setting JSON to false
	I0620 17:55:33.677899  280309 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5884,"bootTime":1718900250,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0620 17:55:33.677972  280309 start.go:139] virtualization:  
	I0620 17:55:33.680523  280309 out.go:177] * [addons-527088] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0620 17:55:33.683616  280309 out.go:177]   - MINIKUBE_LOCATION=19106
	I0620 17:55:33.683714  280309 notify.go:220] Checking for updates...
	I0620 17:55:33.688521  280309 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0620 17:55:33.690653  280309 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19106-274269/kubeconfig
	I0620 17:55:33.692609  280309 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19106-274269/.minikube
	I0620 17:55:33.694668  280309 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0620 17:55:33.696387  280309 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0620 17:55:33.698745  280309 driver.go:392] Setting default libvirt URI to qemu:///system
	I0620 17:55:33.729054  280309 docker.go:122] docker version: linux-26.1.4:Docker Engine - Community
	I0620 17:55:33.729192  280309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0620 17:55:33.785162  280309 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-06-20 17:55:33.776356767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0620 17:55:33.785280  280309 docker.go:295] overlay module found
	I0620 17:55:33.787602  280309 out.go:177] * Using the docker driver based on user configuration
	I0620 17:55:33.789504  280309 start.go:297] selected driver: docker
	I0620 17:55:33.789524  280309 start.go:901] validating driver "docker" against <nil>
	I0620 17:55:33.789538  280309 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0620 17:55:33.790213  280309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0620 17:55:33.837753  280309 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-06-20 17:55:33.828832581 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0620 17:55:33.837908  280309 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0620 17:55:33.838133  280309 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0620 17:55:33.839910  280309 out.go:177] * Using Docker driver with root privileges
	I0620 17:55:33.841800  280309 cni.go:84] Creating CNI manager for ""
	I0620 17:55:33.841817  280309 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0620 17:55:33.841831  280309 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0620 17:55:33.841910  280309 start.go:340] cluster config:
	{Name:addons-527088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-527088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0620 17:55:33.844079  280309 out.go:177] * Starting "addons-527088" primary control-plane node in "addons-527088" cluster
	I0620 17:55:33.845819  280309 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0620 17:55:33.847679  280309 out.go:177] * Pulling base image v0.0.44-1718753665-19106 ...
	I0620 17:55:33.849571  280309 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime containerd
	I0620 17:55:33.849619  280309 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19106-274269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-arm64.tar.lz4
	I0620 17:55:33.849632  280309 cache.go:56] Caching tarball of preloaded images
	I0620 17:55:33.849721  280309 preload.go:173] Found /home/jenkins/minikube-integration/19106-274269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0620 17:55:33.849735  280309 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on containerd
	I0620 17:55:33.850065  280309 profile.go:143] Saving config to /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/config.json ...
	I0620 17:55:33.850092  280309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/config.json: {Name:mkde65740042c86d88fe61f3726d00a1df575643 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 17:55:33.850198  280309 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 in local docker daemon
	I0620 17:55:33.864858  280309 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 to local cache
	I0620 17:55:33.864973  280309 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 in local cache directory
	I0620 17:55:33.864992  280309 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 in local cache directory, skipping pull
	I0620 17:55:33.864998  280309 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 exists in cache, skipping pull
	I0620 17:55:33.865005  280309 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 as a tarball
	I0620 17:55:33.865010  280309 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 from local cache
	I0620 17:55:50.586793  280309 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 from cached tarball
	I0620 17:55:50.586836  280309 cache.go:194] Successfully downloaded all kic artifacts
	I0620 17:55:50.586881  280309 start.go:360] acquireMachinesLock for addons-527088: {Name:mk15e6333c82ea8a692bf509d81c0dbfb71244c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0620 17:55:50.587678  280309 start.go:364] duration metric: took 774.259µs to acquireMachinesLock for "addons-527088"
	I0620 17:55:50.587733  280309 start.go:93] Provisioning new machine with config: &{Name:addons-527088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-527088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0620 17:55:50.587812  280309 start.go:125] createHost starting for "" (driver="docker")
	I0620 17:55:50.590298  280309 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0620 17:55:50.590535  280309 start.go:159] libmachine.API.Create for "addons-527088" (driver="docker")
	I0620 17:55:50.590571  280309 client.go:168] LocalClient.Create starting
	I0620 17:55:50.590685  280309 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca.pem
	I0620 17:55:50.868742  280309 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/cert.pem
	I0620 17:55:51.231531  280309 cli_runner.go:164] Run: docker network inspect addons-527088 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0620 17:55:51.244861  280309 cli_runner.go:211] docker network inspect addons-527088 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0620 17:55:51.244944  280309 network_create.go:284] running [docker network inspect addons-527088] to gather additional debugging logs...
	I0620 17:55:51.244965  280309 cli_runner.go:164] Run: docker network inspect addons-527088
	W0620 17:55:51.258551  280309 cli_runner.go:211] docker network inspect addons-527088 returned with exit code 1
	I0620 17:55:51.258583  280309 network_create.go:287] error running [docker network inspect addons-527088]: docker network inspect addons-527088: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-527088 not found
	I0620 17:55:51.258597  280309 network_create.go:289] output of [docker network inspect addons-527088]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-527088 not found
	
	** /stderr **
	I0620 17:55:51.258706  280309 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0620 17:55:51.273594  280309 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001829090}
	I0620 17:55:51.273633  280309 network_create.go:124] attempt to create docker network addons-527088 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0620 17:55:51.273705  280309 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-527088 addons-527088
	I0620 17:55:51.329446  280309 network_create.go:108] docker network addons-527088 192.168.49.0/24 created
	I0620 17:55:51.329480  280309 kic.go:121] calculated static IP "192.168.49.2" for the "addons-527088" container
	I0620 17:55:51.329562  280309 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0620 17:55:51.342185  280309 cli_runner.go:164] Run: docker volume create addons-527088 --label name.minikube.sigs.k8s.io=addons-527088 --label created_by.minikube.sigs.k8s.io=true
	I0620 17:55:51.357228  280309 oci.go:103] Successfully created a docker volume addons-527088
	I0620 17:55:51.357329  280309 cli_runner.go:164] Run: docker run --rm --name addons-527088-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-527088 --entrypoint /usr/bin/test -v addons-527088:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 -d /var/lib
	I0620 17:55:52.682625  280309 cli_runner.go:217] Completed: docker run --rm --name addons-527088-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-527088 --entrypoint /usr/bin/test -v addons-527088:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 -d /var/lib: (1.325244116s)
	I0620 17:55:52.682656  280309 oci.go:107] Successfully prepared a docker volume addons-527088
	I0620 17:55:52.682682  280309 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime containerd
	I0620 17:55:52.682702  280309 kic.go:194] Starting extracting preloaded images to volume ...
	I0620 17:55:52.682790  280309 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19106-274269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-527088:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 -I lz4 -xf /preloaded.tar -C /extractDir
	I0620 17:55:56.841606  280309 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19106-274269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-527088:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 -I lz4 -xf /preloaded.tar -C /extractDir: (4.158774315s)
	I0620 17:55:56.841640  280309 kic.go:203] duration metric: took 4.158934733s to extract preloaded images to volume ...
	W0620 17:55:56.841792  280309 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0620 17:55:56.841900  280309 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0620 17:55:56.891174  280309 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-527088 --name addons-527088 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-527088 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-527088 --network addons-527088 --ip 192.168.49.2 --volume addons-527088:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636
	I0620 17:55:57.236249  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Running}}
	I0620 17:55:57.280532  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:55:57.296504  280309 cli_runner.go:164] Run: docker exec addons-527088 stat /var/lib/dpkg/alternatives/iptables
	I0620 17:55:57.364373  280309 oci.go:144] the created container "addons-527088" has a running status.
	I0620 17:55:57.364404  280309 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa...
	I0620 17:55:58.228815  280309 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0620 17:55:58.253824  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:55:58.271391  280309 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0620 17:55:58.271410  280309 kic_runner.go:114] Args: [docker exec --privileged addons-527088 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0620 17:55:58.325026  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:55:58.343631  280309 machine.go:94] provisionDockerMachine start ...
	I0620 17:55:58.343872  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:55:58.364240  280309 main.go:141] libmachine: Using SSH client type: native
	I0620 17:55:58.364508  280309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2bc0] 0x3e5420 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0620 17:55:58.364525  280309 main.go:141] libmachine: About to run SSH command:
	hostname
	I0620 17:55:58.499942  280309 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-527088
	
	I0620 17:55:58.499969  280309 ubuntu.go:169] provisioning hostname "addons-527088"
	I0620 17:55:58.500035  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:55:58.521046  280309 main.go:141] libmachine: Using SSH client type: native
	I0620 17:55:58.521301  280309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2bc0] 0x3e5420 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0620 17:55:58.521320  280309 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-527088 && echo "addons-527088" | sudo tee /etc/hostname
	I0620 17:55:58.662300  280309 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-527088
	
	I0620 17:55:58.662394  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:55:58.678753  280309 main.go:141] libmachine: Using SSH client type: native
	I0620 17:55:58.679016  280309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2bc0] 0x3e5420 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0620 17:55:58.679060  280309 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-527088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-527088/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-527088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0620 17:55:58.806826  280309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
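
The hostname provisioning above runs each command over SSH against the published host port (127.0.0.1:33143) using the id_rsa key copied into /home/docker/.ssh/authorized_keys. A minimal sketch of that pattern with golang.org/x/crypto/ssh follows; the address, user and key path are taken from the log, and the rest is illustrative rather than minikube's actual sshutil/libmachine code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key and address as reported in the log above (illustrative values).
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local test node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33143", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("SSH cmd output: %s", out)
}
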
	I0620 17:55:58.806853  280309 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19106-274269/.minikube CaCertPath:/home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19106-274269/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19106-274269/.minikube}
	I0620 17:55:58.806881  280309 ubuntu.go:177] setting up certificates
	I0620 17:55:58.806891  280309 provision.go:84] configureAuth start
	I0620 17:55:58.806952  280309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-527088
	I0620 17:55:58.824730  280309 provision.go:143] copyHostCerts
	I0620 17:55:58.824810  280309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19106-274269/.minikube/cert.pem (1123 bytes)
	I0620 17:55:58.824937  280309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19106-274269/.minikube/key.pem (1679 bytes)
	I0620 17:55:58.825005  280309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19106-274269/.minikube/ca.pem (1082 bytes)
	I0620 17:55:58.825063  280309 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19106-274269/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca-key.pem org=jenkins.addons-527088 san=[127.0.0.1 192.168.49.2 addons-527088 localhost minikube]
	I0620 17:55:59.397406  280309 provision.go:177] copyRemoteCerts
	I0620 17:55:59.397472  280309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0620 17:55:59.397529  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:55:59.414936  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	I0620 17:55:59.507423  280309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0620 17:55:59.530724  280309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0620 17:55:59.553934  280309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0620 17:55:59.577150  280309 provision.go:87] duration metric: took 770.244277ms to configureAuth
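
configureAuth generates a CA-signed server certificate whose SANs cover 127.0.0.1, 192.168.49.2, addons-527088, localhost and minikube, then copies it to /etc/docker on the node. As a rough, self-contained illustration of that kind of cert minting (not minikube's own provision code), a crypto/x509 sketch:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair; in the log this is loaded from ca.pem / ca-key.pem instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs seen in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "addons-527088"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		DNSNames:     []string{"addons-527088", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
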
	I0620 17:55:59.577179  280309 ubuntu.go:193] setting minikube options for container-runtime
	I0620 17:55:59.577374  280309 config.go:182] Loaded profile config "addons-527088": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0620 17:55:59.577388  280309 machine.go:97] duration metric: took 1.23373925s to provisionDockerMachine
	I0620 17:55:59.577395  280309 client.go:171] duration metric: took 8.986814097s to LocalClient.Create
	I0620 17:55:59.577409  280309 start.go:167] duration metric: took 8.986873937s to libmachine.API.Create "addons-527088"
	I0620 17:55:59.577419  280309 start.go:293] postStartSetup for "addons-527088" (driver="docker")
	I0620 17:55:59.577428  280309 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0620 17:55:59.577480  280309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0620 17:55:59.577531  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:55:59.593080  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	I0620 17:55:59.688252  280309 ssh_runner.go:195] Run: cat /etc/os-release
	I0620 17:55:59.691295  280309 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0620 17:55:59.691334  280309 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0620 17:55:59.691347  280309 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0620 17:55:59.691355  280309 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0620 17:55:59.691365  280309 filesync.go:126] Scanning /home/jenkins/minikube-integration/19106-274269/.minikube/addons for local assets ...
	I0620 17:55:59.691429  280309 filesync.go:126] Scanning /home/jenkins/minikube-integration/19106-274269/.minikube/files for local assets ...
	I0620 17:55:59.691458  280309 start.go:296] duration metric: took 114.033104ms for postStartSetup
	I0620 17:55:59.691764  280309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-527088
	I0620 17:55:59.710005  280309 profile.go:143] Saving config to /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/config.json ...
	I0620 17:55:59.710294  280309 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0620 17:55:59.710344  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:55:59.725903  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	I0620 17:55:59.815703  280309 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0620 17:55:59.820305  280309 start.go:128] duration metric: took 9.232477838s to createHost
	I0620 17:55:59.820330  280309 start.go:83] releasing machines lock for "addons-527088", held for 9.232631611s
	I0620 17:55:59.820405  280309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-527088
	I0620 17:55:59.835331  280309 ssh_runner.go:195] Run: cat /version.json
	I0620 17:55:59.835367  280309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0620 17:55:59.835395  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:55:59.835438  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:55:59.852472  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	I0620 17:55:59.864800  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	I0620 17:56:00.090030  280309 ssh_runner.go:195] Run: systemctl --version
	I0620 17:56:00.133971  280309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0620 17:56:00.156053  280309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0620 17:56:00.226390  280309 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0620 17:56:00.226491  280309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0620 17:56:00.287899  280309 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
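
The two find commands above first patch the loopback CNI config and then neutralise any bridge/podman configs by renaming them to *.mk_disabled. The same rename-to-disable idea, sketched in Go (paths and name filters assumed from the log, not taken from minikube's cni package):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, _ := filepath.Glob("/etc/cni/net.d/*")
	for _, path := range matches {
		name := filepath.Base(path)
		if strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			if err := os.Rename(path, path+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Println("disabled", path)
		}
	}
}
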
	I0620 17:56:00.287942  280309 start.go:494] detecting cgroup driver to use...
	I0620 17:56:00.287990  280309 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0620 17:56:00.288103  280309 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0620 17:56:00.322718  280309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0620 17:56:00.337932  280309 docker.go:217] disabling cri-docker service (if available) ...
	I0620 17:56:00.338080  280309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0620 17:56:00.357875  280309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0620 17:56:00.380540  280309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0620 17:56:00.484911  280309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0620 17:56:00.575202  280309 docker.go:233] disabling docker service ...
	I0620 17:56:00.575274  280309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0620 17:56:00.595835  280309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0620 17:56:00.608347  280309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0620 17:56:00.699748  280309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0620 17:56:00.792698  280309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0620 17:56:00.804261  280309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0620 17:56:00.819976  280309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0620 17:56:00.829425  280309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0620 17:56:00.838943  280309 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0620 17:56:00.839110  280309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0620 17:56:00.848415  280309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0620 17:56:00.857569  280309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0620 17:56:00.867747  280309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0620 17:56:00.877354  280309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0620 17:56:00.886233  280309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0620 17:56:00.895964  280309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0620 17:56:00.905927  280309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0620 17:56:00.915378  280309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0620 17:56:00.924008  280309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0620 17:56:00.932494  280309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0620 17:56:01.010560  280309 ssh_runner.go:195] Run: sudo systemctl restart containerd
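
The series of sed edits above rewrites /etc/containerd/config.toml in place (sandbox image, cgroup driver, runc v2 runtime, CNI conf_dir, unprivileged ports) and then restarts containerd. A small sketch of one such in-place edit, forcing SystemdCgroup = false to match the detected cgroupfs driver; the TOML fragment here is illustrative:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	patched := re.ReplaceAllString(config, "${1}SystemdCgroup = false")
	fmt.Print(patched)
}
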
	I0620 17:56:01.155404  280309 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0620 17:56:01.155568  280309 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0620 17:56:01.160089  280309 start.go:562] Will wait 60s for crictl version
	I0620 17:56:01.160203  280309 ssh_runner.go:195] Run: which crictl
	I0620 17:56:01.164289  280309 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0620 17:56:01.203661  280309 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.33
	RuntimeApiVersion:  v1
	I0620 17:56:01.203749  280309 ssh_runner.go:195] Run: containerd --version
	I0620 17:56:01.224805  280309 ssh_runner.go:195] Run: containerd --version
	I0620 17:56:01.249387  280309 out.go:177] * Preparing Kubernetes v1.30.2 on containerd 1.6.33 ...
	I0620 17:56:01.251213  280309 cli_runner.go:164] Run: docker network inspect addons-527088 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0620 17:56:01.266421  280309 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0620 17:56:01.269973  280309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
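
Here the node's /etc/hosts gets a host.minikube.internal entry for the gateway IP by filtering out any stale line and appending the new mapping. A sketch of the same filter-and-append update; the log's version writes to /tmp/h.$$ and copies it into place with sudo, while this simplified version writes the file directly (needs root) and is not minikube's code:

package main

import (
	"bufio"
	"bytes"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var out bytes.Buffer
	sc := bufio.NewScanner(bytes.NewReader(data))
	for sc.Scan() {
		line := sc.Text()
		// Drop any stale mapping, mirroring the grep -v above.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		out.WriteString(line)
		out.WriteByte('\n')
	}
	out.WriteString(entry)
	out.WriteByte('\n')
	if err := os.WriteFile("/etc/hosts", out.Bytes(), 0o644); err != nil {
		panic(err)
	}
}
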
	I0620 17:56:01.280484  280309 kubeadm.go:877] updating cluster {Name:addons-527088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-527088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0620 17:56:01.280608  280309 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime containerd
	I0620 17:56:01.280669  280309 ssh_runner.go:195] Run: sudo crictl images --output json
	I0620 17:56:01.314902  280309 containerd.go:627] all images are preloaded for containerd runtime.
	I0620 17:56:01.314927  280309 containerd.go:534] Images already preloaded, skipping extraction
	I0620 17:56:01.314987  280309 ssh_runner.go:195] Run: sudo crictl images --output json
	I0620 17:56:01.350406  280309 containerd.go:627] all images are preloaded for containerd runtime.
	I0620 17:56:01.350431  280309 cache_images.go:84] Images are preloaded, skipping loading
	I0620 17:56:01.350440  280309 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.2 containerd true true} ...
	I0620 17:56:01.350566  280309 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-527088 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-527088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0620 17:56:01.350647  280309 ssh_runner.go:195] Run: sudo crictl info
	I0620 17:56:01.387146  280309 cni.go:84] Creating CNI manager for ""
	I0620 17:56:01.387174  280309 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0620 17:56:01.387186  280309 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0620 17:56:01.387209  280309 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-527088 NodeName:addons-527088 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0620 17:56:01.387353  280309 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-527088"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0620 17:56:01.387428  280309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0620 17:56:01.397439  280309 binaries.go:44] Found k8s binaries, skipping transfer
	I0620 17:56:01.397526  280309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0620 17:56:01.406487  280309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0620 17:56:01.426125  280309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0620 17:56:01.445362  280309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0620 17:56:01.464518  280309 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0620 17:56:01.468020  280309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0620 17:56:01.479634  280309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0620 17:56:01.575489  280309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0620 17:56:01.595620  280309 certs.go:68] Setting up /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088 for IP: 192.168.49.2
	I0620 17:56:01.595646  280309 certs.go:194] generating shared ca certs ...
	I0620 17:56:01.595663  280309 certs.go:226] acquiring lock for ca certs: {Name:mk8b11ba3bc5463026cd3822a512e17542776a35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 17:56:01.595812  280309 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19106-274269/.minikube/ca.key
	I0620 17:56:01.952197  280309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19106-274269/.minikube/ca.crt ...
	I0620 17:56:01.952228  280309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19106-274269/.minikube/ca.crt: {Name:mk6cc6ad4834592042dd358e26b723ccf81d8bf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 17:56:01.952920  280309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19106-274269/.minikube/ca.key ...
	I0620 17:56:01.952937  280309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19106-274269/.minikube/ca.key: {Name:mk3628639fd64c1612748d4583c1abd4307fd62c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 17:56:01.953030  280309 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19106-274269/.minikube/proxy-client-ca.key
	I0620 17:56:02.280297  280309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19106-274269/.minikube/proxy-client-ca.crt ...
	I0620 17:56:02.280327  280309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19106-274269/.minikube/proxy-client-ca.crt: {Name:mk482932b4b12cade10d3b5817a8c79e7115dfee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 17:56:02.280509  280309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19106-274269/.minikube/proxy-client-ca.key ...
	I0620 17:56:02.280522  280309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19106-274269/.minikube/proxy-client-ca.key: {Name:mk5d9689fd587ae140eddaad248e8c9ee2c1450a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 17:56:02.281330  280309 certs.go:256] generating profile certs ...
	I0620 17:56:02.281396  280309 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.key
	I0620 17:56:02.281424  280309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt with IP's: []
	I0620 17:56:02.675706  280309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt ...
	I0620 17:56:02.675737  280309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: {Name:mk62dbfa4bba79daf1621c0a3f60c417587370a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 17:56:02.676431  280309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.key ...
	I0620 17:56:02.676448  280309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.key: {Name:mkf7a09d40ed10bc3378495b349378099177ca2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 17:56:02.676543  280309 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/apiserver.key.98977655
	I0620 17:56:02.676562  280309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/apiserver.crt.98977655 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0620 17:56:03.316626  280309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/apiserver.crt.98977655 ...
	I0620 17:56:03.316659  280309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/apiserver.crt.98977655: {Name:mk84e22f128dac8111674620403a0b1dfdde6e38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 17:56:03.316850  280309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/apiserver.key.98977655 ...
	I0620 17:56:03.316867  280309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/apiserver.key.98977655: {Name:mk34101255c5356d4e15ce12d446d9298b76d830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 17:56:03.316952  280309 certs.go:381] copying /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/apiserver.crt.98977655 -> /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/apiserver.crt
	I0620 17:56:03.317043  280309 certs.go:385] copying /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/apiserver.key.98977655 -> /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/apiserver.key
	I0620 17:56:03.317100  280309 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/proxy-client.key
	I0620 17:56:03.317121  280309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/proxy-client.crt with IP's: []
	I0620 17:56:03.698023  280309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/proxy-client.crt ...
	I0620 17:56:03.698054  280309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/proxy-client.crt: {Name:mkbdbdddade18c44fd3334cd86bfd222a1adfe4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 17:56:03.698239  280309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/proxy-client.key ...
	I0620 17:56:03.698254  280309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/proxy-client.key: {Name:mk782a6c820e29da909c8ac4d59a1399b7549339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 17:56:03.698465  280309 certs.go:484] found cert: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca-key.pem (1679 bytes)
	I0620 17:56:03.698506  280309 certs.go:484] found cert: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca.pem (1082 bytes)
	I0620 17:56:03.698537  280309 certs.go:484] found cert: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/cert.pem (1123 bytes)
	I0620 17:56:03.698565  280309 certs.go:484] found cert: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/key.pem (1679 bytes)
	I0620 17:56:03.699186  280309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0620 17:56:03.722332  280309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0620 17:56:03.745978  280309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0620 17:56:03.769000  280309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0620 17:56:03.792261  280309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0620 17:56:03.815722  280309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0620 17:56:03.838411  280309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0620 17:56:03.862673  280309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0620 17:56:03.886071  280309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0620 17:56:03.912645  280309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0620 17:56:03.932401  280309 ssh_runner.go:195] Run: openssl version
	I0620 17:56:03.938370  280309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0620 17:56:03.951688  280309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0620 17:56:03.954945  280309 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 20 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I0620 17:56:03.955070  280309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0620 17:56:03.961716  280309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0620 17:56:03.971031  280309 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0620 17:56:03.974217  280309 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0620 17:56:03.974280  280309 kubeadm.go:391] StartCluster: {Name:addons-527088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-527088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0620 17:56:03.974369  280309 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0620 17:56:03.974430  280309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0620 17:56:04.014151  280309 cri.go:89] found id: ""
	I0620 17:56:04.014234  280309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0620 17:56:04.024368  280309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0620 17:56:04.034847  280309 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0620 17:56:04.034919  280309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0620 17:56:04.044316  280309 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0620 17:56:04.044342  280309 kubeadm.go:156] found existing configuration files:
	
	I0620 17:56:04.044399  280309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0620 17:56:04.054338  280309 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0620 17:56:04.054433  280309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0620 17:56:04.063340  280309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0620 17:56:04.072291  280309 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0620 17:56:04.072379  280309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0620 17:56:04.080829  280309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0620 17:56:04.089876  280309 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0620 17:56:04.089945  280309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0620 17:56:04.098640  280309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0620 17:56:04.107246  280309 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0620 17:56:04.107351  280309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0620 17:56:04.116087  280309 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0620 17:56:04.161055  280309 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0620 17:56:04.161256  280309 kubeadm.go:309] [preflight] Running pre-flight checks
	I0620 17:56:04.200759  280309 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0620 17:56:04.200856  280309 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1063-aws
	I0620 17:56:04.200912  280309 kubeadm.go:309] OS: Linux
	I0620 17:56:04.200975  280309 kubeadm.go:309] CGROUPS_CPU: enabled
	I0620 17:56:04.201039  280309 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0620 17:56:04.201103  280309 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0620 17:56:04.201170  280309 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0620 17:56:04.201309  280309 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0620 17:56:04.201411  280309 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0620 17:56:04.201518  280309 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0620 17:56:04.201595  280309 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0620 17:56:04.201664  280309 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0620 17:56:04.265456  280309 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0620 17:56:04.265566  280309 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0620 17:56:04.265662  280309 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0620 17:56:04.499488  280309 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0620 17:56:04.502657  280309 out.go:204]   - Generating certificates and keys ...
	I0620 17:56:04.502804  280309 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0620 17:56:04.502892  280309 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0620 17:56:05.193227  280309 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0620 17:56:05.655897  280309 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0620 17:56:06.273635  280309 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0620 17:56:06.526919  280309 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0620 17:56:07.467260  280309 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0620 17:56:07.467546  280309 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-527088 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0620 17:56:07.721619  280309 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0620 17:56:07.721924  280309 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-527088 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0620 17:56:08.028831  280309 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0620 17:56:08.657787  280309 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0620 17:56:09.054434  280309 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0620 17:56:09.054819  280309 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0620 17:56:09.559744  280309 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0620 17:56:09.795932  280309 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0620 17:56:10.300227  280309 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0620 17:56:10.551987  280309 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0620 17:56:10.971960  280309 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0620 17:56:10.972708  280309 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0620 17:56:10.975669  280309 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0620 17:56:10.978983  280309 out.go:204]   - Booting up control plane ...
	I0620 17:56:10.979102  280309 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0620 17:56:10.979179  280309 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0620 17:56:10.979243  280309 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0620 17:56:10.992762  280309 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0620 17:56:10.992859  280309 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0620 17:56:10.992898  280309 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0620 17:56:11.096563  280309 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0620 17:56:11.096669  280309 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0620 17:56:12.097707  280309 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001644435s
	I0620 17:56:12.097811  280309 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0620 17:56:17.599835  280309 kubeadm.go:309] [api-check] The API server is healthy after 5.502132203s
	I0620 17:56:17.618976  280309 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0620 17:56:17.632562  280309 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0620 17:56:17.656187  280309 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0620 17:56:17.656376  280309 kubeadm.go:309] [mark-control-plane] Marking the node addons-527088 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0620 17:56:17.667258  280309 kubeadm.go:309] [bootstrap-token] Using token: x5boz3.lyram88qg1d3zfqv
	I0620 17:56:17.669490  280309 out.go:204]   - Configuring RBAC rules ...
	I0620 17:56:17.669618  280309 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0620 17:56:17.673654  280309 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0620 17:56:17.683881  280309 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0620 17:56:17.687418  280309 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0620 17:56:17.691333  280309 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0620 17:56:17.695184  280309 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0620 17:56:18.012573  280309 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0620 17:56:18.434946  280309 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0620 17:56:19.012647  280309 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0620 17:56:19.013880  280309 kubeadm.go:309] 
	I0620 17:56:19.013955  280309 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0620 17:56:19.013969  280309 kubeadm.go:309] 
	I0620 17:56:19.014044  280309 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0620 17:56:19.014058  280309 kubeadm.go:309] 
	I0620 17:56:19.014085  280309 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0620 17:56:19.014149  280309 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0620 17:56:19.014204  280309 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0620 17:56:19.014213  280309 kubeadm.go:309] 
	I0620 17:56:19.014266  280309 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0620 17:56:19.014275  280309 kubeadm.go:309] 
	I0620 17:56:19.014321  280309 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0620 17:56:19.014329  280309 kubeadm.go:309] 
	I0620 17:56:19.014379  280309 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0620 17:56:19.014462  280309 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0620 17:56:19.014535  280309 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0620 17:56:19.014545  280309 kubeadm.go:309] 
	I0620 17:56:19.014626  280309 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0620 17:56:19.014703  280309 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0620 17:56:19.014712  280309 kubeadm.go:309] 
	I0620 17:56:19.014794  280309 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token x5boz3.lyram88qg1d3zfqv \
	I0620 17:56:19.014898  280309 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:31f44680df92f22e0bc7bade29ee85ca745ebbbe59d7df1f06ac217b73f7d837 \
	I0620 17:56:19.014924  280309 kubeadm.go:309] 	--control-plane 
	I0620 17:56:19.014932  280309 kubeadm.go:309] 
	I0620 17:56:19.015042  280309 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0620 17:56:19.015053  280309 kubeadm.go:309] 
	I0620 17:56:19.015132  280309 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token x5boz3.lyram88qg1d3zfqv \
	I0620 17:56:19.015234  280309 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:31f44680df92f22e0bc7bade29ee85ca745ebbbe59d7df1f06ac217b73f7d837 
	I0620 17:56:19.018322  280309 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1063-aws\n", err: exit status 1
	I0620 17:56:19.018488  280309 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
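
The --discovery-token-ca-cert-hash printed in the join command above is the SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A short sketch of how that value can be recomputed from the CA cert on the node; the path is the certs directory minikube uses, and the program itself is illustrative:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's hash format: sha256 over the raw SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
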
	I0620 17:56:19.018516  280309 cni.go:84] Creating CNI manager for ""
	I0620 17:56:19.018525  280309 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0620 17:56:19.021747  280309 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0620 17:56:19.024284  280309 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0620 17:56:19.028307  280309 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0620 17:56:19.028327  280309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0620 17:56:19.049186  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0620 17:56:19.314231  280309 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0620 17:56:19.314321  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:19.314376  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-527088 minikube.k8s.io/updated_at=2024_06_20T17_56_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5bfa5828b76fe92a3c5f89a54d8c76f6b5f3f8b minikube.k8s.io/name=addons-527088 minikube.k8s.io/primary=true
	I0620 17:56:19.329045  280309 ops.go:34] apiserver oom_adj: -16
	I0620 17:56:19.447340  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:19.947777  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:20.448250  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:20.948348  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:21.448029  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:21.948008  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:22.448354  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:22.948315  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:23.448146  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:23.947493  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:24.448295  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:24.947632  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:25.448268  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:25.948070  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:26.448003  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:26.947674  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:27.448169  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:27.948417  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:28.447535  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:28.947476  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:29.448031  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:29.948134  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:30.447998  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:30.948202  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:31.447514  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:31.947545  280309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0620 17:56:32.045574  280309 kubeadm.go:1107] duration metric: took 12.73131566s to wait for elevateKubeSystemPrivileges
	W0620 17:56:32.045606  280309 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0620 17:56:32.045614  280309 kubeadm.go:393] duration metric: took 28.071355174s to StartCluster
	I0620 17:56:32.045631  280309 settings.go:142] acquiring lock: {Name:mk5a1a69c9e50173b6bfe88004ea354d3f5ed8f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 17:56:32.046258  280309 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19106-274269/kubeconfig
	I0620 17:56:32.046664  280309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19106-274269/kubeconfig: {Name:mke344b955a4582ad77895759c31c36670e563b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 17:56:32.046870  280309 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0620 17:56:32.047035  280309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0620 17:56:32.047313  280309 config.go:182] Loaded profile config "addons-527088": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0620 17:56:32.047342  280309 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
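(Editor's note: the toEnable map above lists which addons this profile turns on. As a hedged sketch only, and not part of the captured log, the same set could be toggled by hand on an existing profile with the `addons enable` subcommand that mirrors the `addons disable` calls used elsewhere in this report; addon names are taken from the map above.)

	out/minikube-linux-arm64 -p addons-527088 addons enable ingress
	out/minikube-linux-arm64 -p addons-527088 addons enable ingress-dns
	out/minikube-linux-arm64 -p addons-527088 addons enable csi-hostpath-driver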
	I0620 17:56:32.047414  280309 addons.go:69] Setting yakd=true in profile "addons-527088"
	I0620 17:56:32.047437  280309 addons.go:234] Setting addon yakd=true in "addons-527088"
	I0620 17:56:32.047485  280309 host.go:66] Checking if "addons-527088" exists ...
	I0620 17:56:32.047930  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:56:32.048467  280309 addons.go:69] Setting inspektor-gadget=true in profile "addons-527088"
	I0620 17:56:32.048494  280309 addons.go:234] Setting addon inspektor-gadget=true in "addons-527088"
	I0620 17:56:32.048520  280309 host.go:66] Checking if "addons-527088" exists ...
	I0620 17:56:32.048960  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:56:32.049199  280309 addons.go:69] Setting metrics-server=true in profile "addons-527088"
	I0620 17:56:32.049569  280309 addons.go:234] Setting addon metrics-server=true in "addons-527088"
	I0620 17:56:32.049600  280309 host.go:66] Checking if "addons-527088" exists ...
	I0620 17:56:32.049357  280309 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-527088"
	I0620 17:56:32.049833  280309 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-527088"
	I0620 17:56:32.049906  280309 host.go:66] Checking if "addons-527088" exists ...
	I0620 17:56:32.050361  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:56:32.049368  280309 addons.go:69] Setting registry=true in profile "addons-527088"
	I0620 17:56:32.051232  280309 addons.go:234] Setting addon registry=true in "addons-527088"
	I0620 17:56:32.051270  280309 host.go:66] Checking if "addons-527088" exists ...
	I0620 17:56:32.051733  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:56:32.052360  280309 addons.go:69] Setting cloud-spanner=true in profile "addons-527088"
	I0620 17:56:32.052393  280309 addons.go:234] Setting addon cloud-spanner=true in "addons-527088"
	I0620 17:56:32.052430  280309 host.go:66] Checking if "addons-527088" exists ...
	I0620 17:56:32.052844  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:56:32.057051  280309 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-527088"
	I0620 17:56:32.057207  280309 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-527088"
	I0620 17:56:32.057280  280309 host.go:66] Checking if "addons-527088" exists ...
	I0620 17:56:32.057880  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:56:32.049373  280309 addons.go:69] Setting storage-provisioner=true in profile "addons-527088"
	I0620 17:56:32.059297  280309 addons.go:234] Setting addon storage-provisioner=true in "addons-527088"
	I0620 17:56:32.059378  280309 host.go:66] Checking if "addons-527088" exists ...
	I0620 17:56:32.059981  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:56:32.049376  280309 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-527088"
	I0620 17:56:32.065297  280309 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-527088"
	I0620 17:56:32.065610  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:56:32.072532  280309 addons.go:69] Setting default-storageclass=true in profile "addons-527088"
	I0620 17:56:32.072576  280309 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-527088"
	I0620 17:56:32.072894  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:56:32.049382  280309 addons.go:69] Setting volcano=true in profile "addons-527088"
	I0620 17:56:32.079442  280309 addons.go:234] Setting addon volcano=true in "addons-527088"
	I0620 17:56:32.079486  280309 host.go:66] Checking if "addons-527088" exists ...
	I0620 17:56:32.079936  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:56:32.087232  280309 addons.go:69] Setting gcp-auth=true in profile "addons-527088"
	I0620 17:56:32.087282  280309 mustload.go:65] Loading cluster: addons-527088
	I0620 17:56:32.087463  280309 config.go:182] Loaded profile config "addons-527088": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0620 17:56:32.087729  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:56:32.049386  280309 addons.go:69] Setting volumesnapshots=true in profile "addons-527088"
	I0620 17:56:32.099787  280309 addons.go:234] Setting addon volumesnapshots=true in "addons-527088"
	I0620 17:56:32.099837  280309 host.go:66] Checking if "addons-527088" exists ...
	I0620 17:56:32.100296  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:56:32.116551  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:56:32.117502  280309 addons.go:69] Setting ingress=true in profile "addons-527088"
	I0620 17:56:32.117538  280309 addons.go:234] Setting addon ingress=true in "addons-527088"
	I0620 17:56:32.117626  280309 host.go:66] Checking if "addons-527088" exists ...
	I0620 17:56:32.118341  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:56:32.129422  280309 out.go:177] * Verifying Kubernetes components...
	I0620 17:56:32.132137  280309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0620 17:56:32.147085  280309 addons.go:69] Setting ingress-dns=true in profile "addons-527088"
	I0620 17:56:32.147134  280309 addons.go:234] Setting addon ingress-dns=true in "addons-527088"
	I0620 17:56:32.147184  280309 host.go:66] Checking if "addons-527088" exists ...
	I0620 17:56:32.147632  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:56:32.212491  280309 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0620 17:56:32.218330  280309 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0620 17:56:32.218396  280309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0620 17:56:32.218516  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:56:32.275407  280309 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0620 17:56:32.275681  280309 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0620 17:56:32.294657  280309 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
	I0620 17:56:32.309760  280309 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0620 17:56:32.309788  280309 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0620 17:56:32.309860  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:56:32.309997  280309 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0620 17:56:32.312047  280309 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0620 17:56:32.312064  280309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0620 17:56:32.312120  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:56:32.319299  280309 host.go:66] Checking if "addons-527088" exists ...
	I0620 17:56:32.322202  280309 addons.go:234] Setting addon default-storageclass=true in "addons-527088"
	I0620 17:56:32.322235  280309 host.go:66] Checking if "addons-527088" exists ...
	I0620 17:56:32.322641  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:56:32.333483  280309 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0620 17:56:32.333657  280309 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0620 17:56:32.333670  280309 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0620 17:56:32.333741  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:56:32.340080  280309 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-527088"
	I0620 17:56:32.343144  280309 host.go:66] Checking if "addons-527088" exists ...
	I0620 17:56:32.343600  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:56:32.349432  280309 out.go:177]   - Using image docker.io/registry:2.8.3
	I0620 17:56:32.349957  280309 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0620 17:56:32.359101  280309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0620 17:56:32.359180  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:56:32.371699  280309 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0620 17:56:32.371831  280309 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0620 17:56:32.373617  280309 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0620 17:56:32.375612  280309 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0620 17:56:32.375684  280309 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0620 17:56:32.379164  280309 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0620 17:56:32.379578  280309 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0620 17:56:32.380354  280309 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0620 17:56:32.380649  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	I0620 17:56:32.383805  280309 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0620 17:56:32.383889  280309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0620 17:56:32.383968  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:56:32.399140  280309 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0620 17:56:32.401087  280309 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0620 17:56:32.402138  280309 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0620 17:56:32.402154  280309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0620 17:56:32.402244  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:56:32.404568  280309 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0620 17:56:32.404619  280309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0620 17:56:32.404708  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:56:32.411194  280309 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0620 17:56:32.411219  280309 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0620 17:56:32.411282  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:56:32.420349  280309 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0620 17:56:32.422779  280309 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0620 17:56:32.427780  280309 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.7.0
	I0620 17:56:32.429669  280309 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0620 17:56:32.429686  280309 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0620 17:56:32.429804  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:56:32.441051  280309 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0620 17:56:32.450673  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	I0620 17:56:32.451962  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	I0620 17:56:32.452453  280309 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.7.0
	I0620 17:56:32.454829  280309 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0620 17:56:32.457276  280309 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0620 17:56:32.457294  280309 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0620 17:56:32.457363  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:56:32.483068  280309 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.7.0
	I0620 17:56:32.484104  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	I0620 17:56:32.488220  280309 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0620 17:56:32.488240  280309 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0620 17:56:32.488309  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:56:32.563131  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	I0620 17:56:32.582582  280309 out.go:177]   - Using image docker.io/busybox:stable
	I0620 17:56:32.585349  280309 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0620 17:56:32.588963  280309 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0620 17:56:32.588994  280309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0620 17:56:32.589055  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:56:32.633894  280309 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0620 17:56:32.633979  280309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (626760 bytes)
	I0620 17:56:32.634089  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:56:32.659384  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	I0620 17:56:32.663623  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	I0620 17:56:32.664248  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	I0620 17:56:32.664783  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	I0620 17:56:32.679291  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	I0620 17:56:32.687296  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	I0620 17:56:32.687699  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	I0620 17:56:32.689247  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	I0620 17:56:32.700229  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	W0620 17:56:32.701082  280309 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0620 17:56:32.701104  280309 retry.go:31] will retry after 332.014986ms: ssh: handshake failed: EOF
	I0620 17:56:33.035093  280309 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0620 17:56:33.035166  280309 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0620 17:56:33.099438  280309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0620 17:56:33.099507  280309 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.052455887s)
	I0620 17:56:33.099642  280309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
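(Editor's note: the pipeline above edits the coredns ConfigMap in place. Reconstructed from the sed expressions in that command, and shown here only as a sketch rather than captured output, the resulting Corefile gains a `log` directive before `errors` and a `hosts` block before the `forward` directive, mapping host.minikube.internal to the Docker gateway IP:)

	.:53 {
	    log
	    errors
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	}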
	I0620 17:56:33.109271  280309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0620 17:56:33.138309  280309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0620 17:56:33.142337  280309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0620 17:56:33.177744  280309 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0620 17:56:33.177820  280309 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0620 17:56:33.234121  280309 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0620 17:56:33.234200  280309 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0620 17:56:33.246022  280309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0620 17:56:33.307764  280309 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0620 17:56:33.307840  280309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0620 17:56:33.339575  280309 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0620 17:56:33.339602  280309 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0620 17:56:33.369824  280309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0620 17:56:33.373112  280309 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0620 17:56:33.373136  280309 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0620 17:56:33.398144  280309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0620 17:56:33.402873  280309 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0620 17:56:33.402896  280309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0620 17:56:33.435740  280309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0620 17:56:33.439380  280309 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0620 17:56:33.439410  280309 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0620 17:56:33.497310  280309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0620 17:56:33.639729  280309 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0620 17:56:33.639806  280309 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0620 17:56:33.673420  280309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0620 17:56:33.736179  280309 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0620 17:56:33.736254  280309 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0620 17:56:33.743646  280309 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0620 17:56:33.743717  280309 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0620 17:56:33.817280  280309 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0620 17:56:33.817357  280309 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0620 17:56:33.842757  280309 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0620 17:56:33.842835  280309 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0620 17:56:33.879552  280309 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0620 17:56:33.879630  280309 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0620 17:56:34.074393  280309 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0620 17:56:34.074414  280309 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0620 17:56:34.129780  280309 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0620 17:56:34.129799  280309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0620 17:56:34.137268  280309 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0620 17:56:34.137339  280309 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0620 17:56:34.179874  280309 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0620 17:56:34.179950  280309 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0620 17:56:34.214121  280309 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0620 17:56:34.214195  280309 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0620 17:56:34.288795  280309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0620 17:56:34.323697  280309 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0620 17:56:34.323766  280309 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0620 17:56:34.397927  280309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0620 17:56:34.509432  280309 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0620 17:56:34.509503  280309 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0620 17:56:34.547927  280309 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0620 17:56:34.547994  280309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0620 17:56:34.631697  280309 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0620 17:56:34.631776  280309 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0620 17:56:34.791640  280309 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0620 17:56:34.791720  280309 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0620 17:56:34.871324  280309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0620 17:56:34.960665  280309 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0620 17:56:34.960737  280309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0620 17:56:35.077144  280309 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0620 17:56:35.077221  280309 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0620 17:56:35.079289  280309 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.979623616s)
	I0620 17:56:35.079366  280309 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0620 17:56:35.079542  280309 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.980079344s)
	I0620 17:56:35.080474  280309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.971125965s)
	I0620 17:56:35.081190  280309 node_ready.go:35] waiting up to 6m0s for node "addons-527088" to be "Ready" ...
	I0620 17:56:35.086114  280309 node_ready.go:49] node "addons-527088" has status "Ready":"True"
	I0620 17:56:35.086207  280309 node_ready.go:38] duration metric: took 4.72416ms for node "addons-527088" to be "Ready" ...
	I0620 17:56:35.086233  280309 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0620 17:56:35.106995  280309 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kblwf" in "kube-system" namespace to be "Ready" ...
	I0620 17:56:35.229523  280309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0620 17:56:35.403428  280309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.26503191s)
	I0620 17:56:35.403550  280309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.261148733s)
	I0620 17:56:35.449121  280309 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0620 17:56:35.449193  280309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0620 17:56:35.584482  280309 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-527088" context rescaled to 1 replicas
	I0620 17:56:35.610106  280309 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-kblwf" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-kblwf" not found
	I0620 17:56:35.610139  280309 pod_ready.go:81] duration metric: took 503.042544ms for pod "coredns-7db6d8ff4d-kblwf" in "kube-system" namespace to be "Ready" ...
	E0620 17:56:35.610153  280309 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-kblwf" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-kblwf" not found
	I0620 17:56:35.610160  280309 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zjv5x" in "kube-system" namespace to be "Ready" ...
	I0620 17:56:35.685275  280309 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0620 17:56:35.685347  280309 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0620 17:56:35.777632  280309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.531527867s)
	I0620 17:56:35.869242  280309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.499382011s)
	I0620 17:56:35.944439  280309 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0620 17:56:35.944508  280309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0620 17:56:35.995722  280309 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0620 17:56:35.995793  280309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0620 17:56:36.254439  280309 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0620 17:56:36.254516  280309 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0620 17:56:36.456052  280309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0620 17:56:37.619030  280309 pod_ready.go:102] pod "coredns-7db6d8ff4d-zjv5x" in "kube-system" namespace has status "Ready":"False"
	I0620 17:56:39.560269  280309 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0620 17:56:39.560423  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:56:39.594038  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	I0620 17:56:40.050339  280309 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0620 17:56:40.117930  280309 pod_ready.go:102] pod "coredns-7db6d8ff4d-zjv5x" in "kube-system" namespace has status "Ready":"False"
	I0620 17:56:40.251705  280309 addons.go:234] Setting addon gcp-auth=true in "addons-527088"
	I0620 17:56:40.251760  280309 host.go:66] Checking if "addons-527088" exists ...
	I0620 17:56:40.252253  280309 cli_runner.go:164] Run: docker container inspect addons-527088 --format={{.State.Status}}
	I0620 17:56:40.278489  280309 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0620 17:56:40.278556  280309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527088
	I0620 17:56:40.304492  280309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/addons-527088/id_rsa Username:docker}
	I0620 17:56:40.797677  280309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.399494087s)
	I0620 17:56:40.797722  280309 addons.go:475] Verifying addon ingress=true in "addons-527088"
	I0620 17:56:40.797876  280309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.36211117s)
	I0620 17:56:40.800441  280309 out.go:177] * Verifying ingress addon...
	I0620 17:56:40.803362  280309 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0620 17:56:40.809703  280309 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0620 17:56:40.809730  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:41.325474  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:41.872523  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:42.187232  280309 pod_ready.go:102] pod "coredns-7db6d8ff4d-zjv5x" in "kube-system" namespace has status "Ready":"False"
	I0620 17:56:42.328370  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:42.533363  280309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.035966494s)
	I0620 17:56:42.533457  280309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.859956974s)
	I0620 17:56:42.533476  280309 addons.go:475] Verifying addon registry=true in "addons-527088"
	I0620 17:56:42.533750  280309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.24482058s)
	I0620 17:56:42.533772  280309 addons.go:475] Verifying addon metrics-server=true in "addons-527088"
	I0620 17:56:42.533829  280309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.135824309s)
	I0620 17:56:42.533975  280309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.662577617s)
	W0620 17:56:42.534238  280309 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0620 17:56:42.534256  280309 retry.go:31] will retry after 255.76992ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
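(Editor's note: the failure above is the usual CRD ordering race: the csi-hostpath-snapclass VolumeSnapshotClass is applied in the same batch as the CRDs that define it, so the apply fails until those CRDs are established; the log shows minikube retrying after ~256ms, and the `kubectl apply --force` retry at 17:56:42.790 below completes without a further error. As a hedged sketch only, assuming stock kubectl and the CRD names from the manifests above, the race could also be avoided by waiting for the CRDs before applying the class:)

	kubectl --kubeconfig=/var/lib/minikube/kubeconfig wait \
	  --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io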
	I0620 17:56:42.534033  280309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.304435364s)
	I0620 17:56:42.535690  280309 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-527088 service yakd-dashboard -n yakd-dashboard
	
	I0620 17:56:42.535849  280309 out.go:177] * Verifying registry addon...
	I0620 17:56:42.539147  280309 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0620 17:56:42.546457  280309 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0620 17:56:42.546535  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:42.790443  280309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0620 17:56:42.828760  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:43.004683  280309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.54852961s)
	I0620 17:56:43.004728  280309 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-527088"
	I0620 17:56:43.005297  280309 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.726774167s)
	I0620 17:56:43.007990  280309 out.go:177] * Verifying csi-hostpath-driver addon...
	I0620 17:56:43.008141  280309 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0620 17:56:43.011066  280309 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0620 17:56:43.011870  280309 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0620 17:56:43.013193  280309 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0620 17:56:43.013238  280309 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0620 17:56:43.032948  280309 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0620 17:56:43.032971  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:43.050958  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:43.144472  280309 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0620 17:56:43.144497  280309 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0620 17:56:43.194732  280309 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0620 17:56:43.194757  280309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0620 17:56:43.253281  280309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0620 17:56:43.307327  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:43.520460  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:43.546779  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:43.809068  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:44.018994  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:44.044343  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:44.225130  280309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.434639515s)
	I0620 17:56:44.315202  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:44.405910  280309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.152591462s)
	I0620 17:56:44.408589  280309 addons.go:475] Verifying addon gcp-auth=true in "addons-527088"
	I0620 17:56:44.412943  280309 out.go:177] * Verifying gcp-auth addon...
	I0620 17:56:44.415664  280309 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0620 17:56:44.418120  280309 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0620 17:56:44.523499  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:44.545637  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:44.617186  280309 pod_ready.go:102] pod "coredns-7db6d8ff4d-zjv5x" in "kube-system" namespace has status "Ready":"False"
	I0620 17:56:44.807717  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:45.027523  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:45.051798  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:45.309822  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:45.518074  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:45.544892  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:45.808451  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:46.018512  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:46.044390  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:46.307855  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:46.519444  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:46.556741  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:46.622085  280309 pod_ready.go:102] pod "coredns-7db6d8ff4d-zjv5x" in "kube-system" namespace has status "Ready":"False"
	I0620 17:56:46.809165  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:47.018260  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:47.044670  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:47.308186  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:47.517838  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:47.545973  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:47.808489  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:48.019466  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:48.044897  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:48.307983  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:48.517881  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:48.545807  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:48.809655  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:49.017681  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:49.044490  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:49.117581  280309 pod_ready.go:102] pod "coredns-7db6d8ff4d-zjv5x" in "kube-system" namespace has status "Ready":"False"
	I0620 17:56:49.307890  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:49.523124  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:49.544828  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:49.808397  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:50.018006  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:50.044687  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:50.308437  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:50.517560  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:50.547716  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:50.808528  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:51.017837  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:51.044901  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:51.118389  280309 pod_ready.go:102] pod "coredns-7db6d8ff4d-zjv5x" in "kube-system" namespace has status "Ready":"False"
	I0620 17:56:51.309116  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:51.520305  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:51.543767  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:51.820827  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:52.017519  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:52.045581  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:52.307442  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:52.517480  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:52.546180  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:52.808198  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:53.017315  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:53.043552  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:53.307809  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:53.518094  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:53.545052  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:53.622839  280309 pod_ready.go:102] pod "coredns-7db6d8ff4d-zjv5x" in "kube-system" namespace has status "Ready":"False"
	I0620 17:56:53.807802  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:54.018284  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:54.045017  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:54.308130  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:54.517908  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:54.546149  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:54.813741  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:55.018823  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:55.044105  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:55.307668  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:55.517959  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:55.544694  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:55.807869  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:56.019236  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:56.044039  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:56.116278  280309 pod_ready.go:102] pod "coredns-7db6d8ff4d-zjv5x" in "kube-system" namespace has status "Ready":"False"
	I0620 17:56:56.307905  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:56.519203  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:56.544096  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:56.808246  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:57.018967  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:57.044220  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:57.308575  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:57.517812  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:57.544480  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:57.808377  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:58.019263  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:58.044042  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:58.308102  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:58.518310  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:58.547187  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:58.615603  280309 pod_ready.go:102] pod "coredns-7db6d8ff4d-zjv5x" in "kube-system" namespace has status "Ready":"False"
	I0620 17:56:58.808114  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:59.018361  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:59.044088  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:59.307508  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:56:59.517898  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:56:59.545518  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:56:59.808320  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:00.040597  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:00.084797  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:00.309094  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:00.517823  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:00.546435  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:00.616998  280309 pod_ready.go:102] pod "coredns-7db6d8ff4d-zjv5x" in "kube-system" namespace has status "Ready":"False"
	I0620 17:57:00.808446  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:01.017564  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:01.043710  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:01.311985  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:01.517402  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:01.545314  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:01.808398  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:02.017627  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:02.043929  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:02.308169  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:02.517694  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:02.544845  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:02.808025  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:03.027073  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:03.044728  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:03.117048  280309 pod_ready.go:102] pod "coredns-7db6d8ff4d-zjv5x" in "kube-system" namespace has status "Ready":"False"
	I0620 17:57:03.308348  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:03.517956  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:03.544394  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:03.808665  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:04.018126  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:04.044853  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:04.307584  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:04.523988  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:04.554323  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:04.807986  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:05.017770  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:05.044307  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:05.117776  280309 pod_ready.go:102] pod "coredns-7db6d8ff4d-zjv5x" in "kube-system" namespace has status "Ready":"False"
	I0620 17:57:05.308215  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:05.522496  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:05.544510  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:05.808051  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:06.020908  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:06.046375  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:06.308401  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:06.517829  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:06.546858  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:06.807893  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:07.017901  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:07.044741  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:07.308318  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:07.518851  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:07.552851  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:07.617794  280309 pod_ready.go:102] pod "coredns-7db6d8ff4d-zjv5x" in "kube-system" namespace has status "Ready":"False"
	I0620 17:57:07.816052  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:08.020056  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:08.045662  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:08.308864  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:08.518374  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:08.548887  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:08.808261  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:09.019329  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:09.055182  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:09.308602  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:09.518883  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:09.550315  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:09.625517  280309 pod_ready.go:102] pod "coredns-7db6d8ff4d-zjv5x" in "kube-system" namespace has status "Ready":"False"
	I0620 17:57:09.808551  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:10.020573  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:10.044770  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:10.308486  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:10.520777  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:10.545149  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:10.808840  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:11.018457  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:11.044094  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:11.117072  280309 pod_ready.go:92] pod "coredns-7db6d8ff4d-zjv5x" in "kube-system" namespace has status "Ready":"True"
	I0620 17:57:11.117147  280309 pod_ready.go:81] duration metric: took 35.506977486s for pod "coredns-7db6d8ff4d-zjv5x" in "kube-system" namespace to be "Ready" ...
	I0620 17:57:11.117175  280309 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-527088" in "kube-system" namespace to be "Ready" ...
	I0620 17:57:11.122720  280309 pod_ready.go:92] pod "etcd-addons-527088" in "kube-system" namespace has status "Ready":"True"
	I0620 17:57:11.122794  280309 pod_ready.go:81] duration metric: took 5.595683ms for pod "etcd-addons-527088" in "kube-system" namespace to be "Ready" ...
	I0620 17:57:11.122824  280309 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-527088" in "kube-system" namespace to be "Ready" ...
	I0620 17:57:11.128363  280309 pod_ready.go:92] pod "kube-apiserver-addons-527088" in "kube-system" namespace has status "Ready":"True"
	I0620 17:57:11.128450  280309 pod_ready.go:81] duration metric: took 5.603773ms for pod "kube-apiserver-addons-527088" in "kube-system" namespace to be "Ready" ...
	I0620 17:57:11.128478  280309 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-527088" in "kube-system" namespace to be "Ready" ...
	I0620 17:57:11.133913  280309 pod_ready.go:92] pod "kube-controller-manager-addons-527088" in "kube-system" namespace has status "Ready":"True"
	I0620 17:57:11.133983  280309 pod_ready.go:81] duration metric: took 5.48283ms for pod "kube-controller-manager-addons-527088" in "kube-system" namespace to be "Ready" ...
	I0620 17:57:11.134010  280309 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q4j7l" in "kube-system" namespace to be "Ready" ...
	I0620 17:57:11.139526  280309 pod_ready.go:92] pod "kube-proxy-q4j7l" in "kube-system" namespace has status "Ready":"True"
	I0620 17:57:11.139597  280309 pod_ready.go:81] duration metric: took 5.565102ms for pod "kube-proxy-q4j7l" in "kube-system" namespace to be "Ready" ...
	I0620 17:57:11.139624  280309 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-527088" in "kube-system" namespace to be "Ready" ...
	I0620 17:57:11.308547  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:11.514435  280309 pod_ready.go:92] pod "kube-scheduler-addons-527088" in "kube-system" namespace has status "Ready":"True"
	I0620 17:57:11.514467  280309 pod_ready.go:81] duration metric: took 374.818875ms for pod "kube-scheduler-addons-527088" in "kube-system" namespace to be "Ready" ...
	I0620 17:57:11.514479  280309 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-kmqzl" in "kube-system" namespace to be "Ready" ...
	I0620 17:57:11.518137  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:11.550865  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:11.808307  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:12.028242  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:12.044174  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:12.307671  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:12.518123  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:12.551967  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:12.808732  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:13.025433  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:13.045870  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:13.309629  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:13.520864  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:13.524016  280309 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kmqzl" in "kube-system" namespace has status "Ready":"False"
	I0620 17:57:13.545870  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:13.807452  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:14.018543  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:14.047558  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:14.308356  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:14.520081  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:14.544827  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:14.808709  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:15.024209  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:15.045025  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:15.308793  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:15.522107  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:15.528873  280309 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kmqzl" in "kube-system" namespace has status "Ready":"False"
	I0620 17:57:15.550206  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:15.808956  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:16.023859  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:16.048575  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:16.309549  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:16.520331  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:16.547124  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:16.808499  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:17.018916  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:17.044486  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:17.308617  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:17.521140  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:17.547131  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:17.808040  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:18.028521  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:18.031989  280309 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kmqzl" in "kube-system" namespace has status "Ready":"False"
	I0620 17:57:18.045777  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:18.308515  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:18.517521  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:18.545200  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:18.808253  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:19.023240  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:19.046684  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:19.308506  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:19.523129  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:19.550879  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:19.810077  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:20.020125  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:20.046464  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:20.319309  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:20.518249  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:20.522658  280309 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kmqzl" in "kube-system" namespace has status "Ready":"False"
	I0620 17:57:20.545415  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:20.808019  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:21.030182  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:21.045508  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:21.307395  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:21.518465  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:21.545051  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:21.809885  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:22.019215  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:22.046554  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:22.308502  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:22.518517  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:22.547652  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:22.808314  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:23.019743  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:23.024192  280309 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kmqzl" in "kube-system" namespace has status "Ready":"False"
	I0620 17:57:23.048363  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:23.308486  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:23.517877  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:23.544870  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:23.807879  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:24.018652  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:24.050579  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:24.308345  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:24.517568  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:24.545273  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:24.808160  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:25.020175  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:25.025823  280309 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kmqzl" in "kube-system" namespace has status "Ready":"False"
	I0620 17:57:25.044852  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:25.308119  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:25.520056  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:25.545736  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:25.808223  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:26.019784  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:26.043853  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:26.307534  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:26.517476  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:26.545841  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:26.817351  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:27.022186  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:27.044388  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:27.308233  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:27.529142  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:27.533582  280309 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kmqzl" in "kube-system" namespace has status "Ready":"False"
	I0620 17:57:27.554323  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:27.807783  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:28.023905  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:28.045239  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:28.307616  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:28.518646  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:28.545920  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:28.808515  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:29.017156  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:29.043936  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:29.308732  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:29.520500  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:29.547868  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:29.808138  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:30.027247  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:30.031224  280309 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kmqzl" in "kube-system" namespace has status "Ready":"False"
	I0620 17:57:30.045548  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:30.308669  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:30.519272  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:30.545043  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:30.808649  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:31.026203  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:31.043875  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:31.307449  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:31.529911  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:31.533529  280309 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-kmqzl" in "kube-system" namespace has status "Ready":"True"
	I0620 17:57:31.533564  280309 pod_ready.go:81] duration metric: took 20.019075521s for pod "nvidia-device-plugin-daemonset-kmqzl" in "kube-system" namespace to be "Ready" ...
	I0620 17:57:31.533574  280309 pod_ready.go:38] duration metric: took 56.447289421s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0620 17:57:31.533615  280309 api_server.go:52] waiting for apiserver process to appear ...
	I0620 17:57:31.533768  280309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0620 17:57:31.545209  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:31.559739  280309 api_server.go:72] duration metric: took 59.512839659s to wait for apiserver process to appear ...
	I0620 17:57:31.559766  280309 api_server.go:88] waiting for apiserver healthz status ...
	I0620 17:57:31.559790  280309 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0620 17:57:31.567378  280309 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0620 17:57:31.568307  280309 api_server.go:141] control plane version: v1.30.2
	I0620 17:57:31.568332  280309 api_server.go:131] duration metric: took 8.557909ms to wait for apiserver health ...
	I0620 17:57:31.568341  280309 system_pods.go:43] waiting for kube-system pods to appear ...
	I0620 17:57:31.578624  280309 system_pods.go:59] 18 kube-system pods found
	I0620 17:57:31.578659  280309 system_pods.go:61] "coredns-7db6d8ff4d-zjv5x" [4cf84221-65ab-4a1d-98ec-8b9ad0408a6f] Running
	I0620 17:57:31.578669  280309 system_pods.go:61] "csi-hostpath-attacher-0" [7000805e-511c-4847-b0f8-4769c0773542] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0620 17:57:31.578699  280309 system_pods.go:61] "csi-hostpath-resizer-0" [2da9613d-e95a-4858-b78d-0bdc69926466] Running
	I0620 17:57:31.578715  280309 system_pods.go:61] "csi-hostpathplugin-x6mpj" [724be1fc-8f0b-4658-8e42-a97172ccf1e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0620 17:57:31.578721  280309 system_pods.go:61] "etcd-addons-527088" [718f75d3-e3b1-42b4-86a1-27b9df5e7066] Running
	I0620 17:57:31.578729  280309 system_pods.go:61] "kindnet-pb4v5" [9cc554fe-30b7-48da-99d6-0d0d2351d523] Running
	I0620 17:57:31.578733  280309 system_pods.go:61] "kube-apiserver-addons-527088" [ac062148-0092-47b7-b46e-3ce5ff0733a4] Running
	I0620 17:57:31.578737  280309 system_pods.go:61] "kube-controller-manager-addons-527088" [335cc0db-147e-40f1-9236-46b637406fb5] Running
	I0620 17:57:31.578743  280309 system_pods.go:61] "kube-ingress-dns-minikube" [6d0ce51f-1380-413e-828f-bf90177784f8] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0620 17:57:31.578750  280309 system_pods.go:61] "kube-proxy-q4j7l" [9b779c40-d9f3-43c7-be6a-04b4281ebb2b] Running
	I0620 17:57:31.578754  280309 system_pods.go:61] "kube-scheduler-addons-527088" [d58ab987-35fc-4dc5-ba76-0d8d5f9bd2a3] Running
	I0620 17:57:31.578765  280309 system_pods.go:61] "metrics-server-c59844bb4-4cgsc" [07aa09b6-d3f1-4d78-9265-b99c9de1ab03] Running
	I0620 17:57:31.578770  280309 system_pods.go:61] "nvidia-device-plugin-daemonset-kmqzl" [251afb28-9d62-40dc-807a-5b8184c7ca8e] Running
	I0620 17:57:31.578775  280309 system_pods.go:61] "registry-6dksb" [3be3b3bd-56ca-4f7d-9f6b-057cc5818b82] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0620 17:57:31.578781  280309 system_pods.go:61] "registry-proxy-kdb4q" [5cbce6b3-228b-4728-b364-a7df88294438] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0620 17:57:31.578789  280309 system_pods.go:61] "snapshot-controller-745499f584-2jdsr" [12e9e2bc-316c-42ca-a02d-5c9524753b18] Running
	I0620 17:57:31.578793  280309 system_pods.go:61] "snapshot-controller-745499f584-r7zqx" [eb8921ee-8831-448a-b8fb-83cd31bda188] Running
	I0620 17:57:31.578799  280309 system_pods.go:61] "storage-provisioner" [3772e548-1fac-4c4e-9014-4e181f2b6e40] Running
	I0620 17:57:31.578806  280309 system_pods.go:74] duration metric: took 10.458689ms to wait for pod list to return data ...
	I0620 17:57:31.578821  280309 default_sa.go:34] waiting for default service account to be created ...
	I0620 17:57:31.581158  280309 default_sa.go:45] found service account: "default"
	I0620 17:57:31.581184  280309 default_sa.go:55] duration metric: took 2.356016ms for default service account to be created ...
	I0620 17:57:31.581194  280309 system_pods.go:116] waiting for k8s-apps to be running ...
	I0620 17:57:31.592001  280309 system_pods.go:86] 18 kube-system pods found
	I0620 17:57:31.592040  280309 system_pods.go:89] "coredns-7db6d8ff4d-zjv5x" [4cf84221-65ab-4a1d-98ec-8b9ad0408a6f] Running
	I0620 17:57:31.592051  280309 system_pods.go:89] "csi-hostpath-attacher-0" [7000805e-511c-4847-b0f8-4769c0773542] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0620 17:57:31.592057  280309 system_pods.go:89] "csi-hostpath-resizer-0" [2da9613d-e95a-4858-b78d-0bdc69926466] Running
	I0620 17:57:31.592066  280309 system_pods.go:89] "csi-hostpathplugin-x6mpj" [724be1fc-8f0b-4658-8e42-a97172ccf1e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0620 17:57:31.592071  280309 system_pods.go:89] "etcd-addons-527088" [718f75d3-e3b1-42b4-86a1-27b9df5e7066] Running
	I0620 17:57:31.592076  280309 system_pods.go:89] "kindnet-pb4v5" [9cc554fe-30b7-48da-99d6-0d0d2351d523] Running
	I0620 17:57:31.592086  280309 system_pods.go:89] "kube-apiserver-addons-527088" [ac062148-0092-47b7-b46e-3ce5ff0733a4] Running
	I0620 17:57:31.592092  280309 system_pods.go:89] "kube-controller-manager-addons-527088" [335cc0db-147e-40f1-9236-46b637406fb5] Running
	I0620 17:57:31.592101  280309 system_pods.go:89] "kube-ingress-dns-minikube" [6d0ce51f-1380-413e-828f-bf90177784f8] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0620 17:57:31.592106  280309 system_pods.go:89] "kube-proxy-q4j7l" [9b779c40-d9f3-43c7-be6a-04b4281ebb2b] Running
	I0620 17:57:31.592113  280309 system_pods.go:89] "kube-scheduler-addons-527088" [d58ab987-35fc-4dc5-ba76-0d8d5f9bd2a3] Running
	I0620 17:57:31.592118  280309 system_pods.go:89] "metrics-server-c59844bb4-4cgsc" [07aa09b6-d3f1-4d78-9265-b99c9de1ab03] Running
	I0620 17:57:31.592122  280309 system_pods.go:89] "nvidia-device-plugin-daemonset-kmqzl" [251afb28-9d62-40dc-807a-5b8184c7ca8e] Running
	I0620 17:57:31.592130  280309 system_pods.go:89] "registry-6dksb" [3be3b3bd-56ca-4f7d-9f6b-057cc5818b82] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0620 17:57:31.592136  280309 system_pods.go:89] "registry-proxy-kdb4q" [5cbce6b3-228b-4728-b364-a7df88294438] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0620 17:57:31.592144  280309 system_pods.go:89] "snapshot-controller-745499f584-2jdsr" [12e9e2bc-316c-42ca-a02d-5c9524753b18] Running
	I0620 17:57:31.592149  280309 system_pods.go:89] "snapshot-controller-745499f584-r7zqx" [eb8921ee-8831-448a-b8fb-83cd31bda188] Running
	I0620 17:57:31.592153  280309 system_pods.go:89] "storage-provisioner" [3772e548-1fac-4c4e-9014-4e181f2b6e40] Running
	I0620 17:57:31.592159  280309 system_pods.go:126] duration metric: took 10.960407ms to wait for k8s-apps to be running ...
	I0620 17:57:31.592175  280309 system_svc.go:44] waiting for kubelet service to be running ....
	I0620 17:57:31.592239  280309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0620 17:57:31.611586  280309 system_svc.go:56] duration metric: took 19.400999ms WaitForService to wait for kubelet
	I0620 17:57:31.611627  280309 kubeadm.go:576] duration metric: took 59.564732318s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0620 17:57:31.611648  280309 node_conditions.go:102] verifying NodePressure condition ...
	I0620 17:57:31.614617  280309 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0620 17:57:31.614652  280309 node_conditions.go:123] node cpu capacity is 2
	I0620 17:57:31.614673  280309 node_conditions.go:105] duration metric: took 3.01154ms to run NodePressure ...
	I0620 17:57:31.614687  280309 start.go:240] waiting for startup goroutines ...
	I0620 17:57:31.807644  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:32.017813  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:32.046211  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:32.308082  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:32.518908  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:32.547389  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:32.808359  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:33.018096  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:33.043901  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:33.308612  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:33.518927  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:33.545124  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:33.808387  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:34.018246  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:34.044286  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:34.308181  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:34.517679  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:34.545808  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:34.816608  280309 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0620 17:57:35.018827  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:35.044448  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:35.308534  280309 kapi.go:107] duration metric: took 54.505171352s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0620 17:57:35.519268  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:35.547346  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:36.020179  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:36.045261  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:36.517813  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:36.545493  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:37.020005  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:37.046239  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:37.517475  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:37.546620  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:38.020584  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:38.044897  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:38.517441  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:38.545270  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:39.020401  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:39.047353  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:39.518545  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:39.545270  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:40.019526  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:40.045328  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:40.518044  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:40.545405  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:41.017727  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:41.044853  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:41.522525  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:41.559962  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:42.019270  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:42.044305  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:42.519309  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:42.546083  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:43.019774  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:43.044954  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:43.517296  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:43.548416  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:44.027037  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:44.044816  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:44.518150  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:44.545200  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:45.026475  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:45.046814  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0620 17:57:45.518265  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:45.546408  280309 kapi.go:107] duration metric: took 1m3.007267873s to wait for kubernetes.io/minikube-addons=registry ...
	I0620 17:57:46.018214  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:46.517620  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:47.018054  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:47.517573  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:48.024856  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:48.517579  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:49.018453  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:49.517994  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:50.021695  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:50.518159  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:51.020857  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:51.518189  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:52.019785  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0620 17:57:52.517635  280309 kapi.go:107] duration metric: took 1m9.505762199s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0620 17:58:07.419765  280309 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0620 17:58:07.419792  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:07.920362  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:08.423969  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:08.919922  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:09.419583  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:09.919051  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:10.419986  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:10.919839  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:11.419176  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:11.919630  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:12.419052  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:12.920058  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:13.419331  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:13.919590  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:14.420075  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:14.919984  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:15.419103  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:15.919782  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:16.419391  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:16.919265  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:17.419622  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:17.919197  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:18.419825  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:18.919779  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:19.419386  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:19.919651  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:20.418936  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:20.919560  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:21.419257  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:21.919528  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:22.418933  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:22.920188  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:23.419597  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:23.921890  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:24.420795  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:24.919850  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:25.419321  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:25.919037  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:26.420142  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:26.919225  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:27.418896  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:27.918883  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:28.419681  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:28.919275  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:29.418634  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:29.919083  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:30.420052  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:30.919959  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:31.419853  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:31.919681  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:32.419129  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:32.919339  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:33.419165  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:33.919319  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:34.418836  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:34.920153  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:35.418832  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:35.919163  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:36.419750  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:36.919592  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:37.419765  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:37.919072  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:38.419142  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:38.920377  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:39.418886  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:39.919682  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:40.419508  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:40.918981  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:41.419120  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:41.920302  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:42.424864  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:42.919302  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:43.419444  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:43.920287  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:44.418992  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:44.919588  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:45.419036  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:45.920074  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:46.419855  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:46.918965  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:47.419801  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:47.922715  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:48.420721  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:48.919690  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:49.419399  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:49.919448  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:50.419786  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:50.919471  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:51.419244  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:51.919243  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:52.419239  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:52.920173  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:53.419503  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:53.919809  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:54.419703  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:54.919650  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:55.420576  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:55.919545  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:56.418862  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:56.919753  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:57.418927  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:57.920182  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:58.419418  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:58.920187  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:59.418990  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:58:59.919596  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:00.419417  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:00.919261  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:01.419271  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:01.919157  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:02.419323  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:02.919082  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:03.419879  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:03.918993  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:04.420228  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:04.925796  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:05.419387  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:05.919955  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:06.419087  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:06.920250  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:07.419268  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:07.919502  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:08.419305  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:08.919736  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:09.419269  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:09.918941  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:10.419551  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:10.919845  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:11.419850  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:11.919414  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:12.420780  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:12.920063  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:13.419967  280309 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0620 17:59:13.919692  280309 kapi.go:107] duration metric: took 2m29.504026201s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0620 17:59:13.921889  280309 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-527088 cluster.
	I0620 17:59:13.924046  280309 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0620 17:59:13.926585  280309 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0620 17:59:13.928653  280309 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, ingress-dns, storage-provisioner-rancher, volcano, metrics-server, inspektor-gadget, yakd, volumesnapshots, ingress, registry, csi-hostpath-driver, gcp-auth
	I0620 17:59:13.930766  280309 addons.go:510] duration metric: took 2m41.883414375s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner ingress-dns storage-provisioner-rancher volcano metrics-server inspektor-gadget yakd volumesnapshots ingress registry csi-hostpath-driver gcp-auth]
	I0620 17:59:13.930826  280309 start.go:245] waiting for cluster config update ...
	I0620 17:59:13.930850  280309 start.go:254] writing updated cluster config ...
	I0620 17:59:13.931208  280309 ssh_runner.go:195] Run: rm -f paused
	I0620 17:59:14.303512  280309 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0620 17:59:14.308672  280309 out.go:177] * Done! kubectl is now configured to use "addons-527088" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	296f80b16e7ab       dd1b12fcb6097       6 seconds ago       Exited              hello-world-app           2                   4a09e75d6ad4c       hello-world-app-86c47465fc-jmkgs
	f971856aedd42       11ceee7cdc572       31 seconds ago      Running             nginx                     0                   9940ac0fffe54       test-job-nginx-0
	048603ea277d4       4f49228258b64       31 seconds ago      Running             nginx                     0                   a1b933665575e       nginx
	9d47a3662714d       9e1a67634369d       3 minutes ago       Running             headlamp                  0                   f763b5167e2a1       headlamp-7fc69f7444-kpc7m
	928c7c9b3c66e       6ef582f3ec844       3 minutes ago       Running             gcp-auth                  0                   c9113abbefdc0       gcp-auth-5db96cd9b4-j8tw4
	073fd3862ac6c       20e3f2db01e81       5 minutes ago       Running             yakd                      0                   f62f541b5ced0       yakd-dashboard-5ddbf7d777-4rbvh
	c8230811897b5       2437cf7621777       5 minutes ago       Running             coredns                   0                   0d15079f0e277       coredns-7db6d8ff4d-zjv5x
	94c4e30569abc       ba04bb24b9575       5 minutes ago       Running             storage-provisioner       0                   709104aa02267       storage-provisioner
	80eed4fc3e04b       66dbb96a9149f       5 minutes ago       Running             kube-proxy                0                   2ea9c140bccb8       kube-proxy-q4j7l
	130c8a36b5148       89d73d416b992       5 minutes ago       Running             kindnet-cni               0                   49f3ebda1ebba       kindnet-pb4v5
	79307dac4e2a2       c7dd04b1bafeb       6 minutes ago       Running             kube-scheduler            0                   0ccce19806ceb       kube-scheduler-addons-527088
	4ff37ab342bce       e1dcc3400d3ea       6 minutes ago       Running             kube-controller-manager   0                   f24e3c2023012       kube-controller-manager-addons-527088
	48eeef76de846       84c601f3f72c8       6 minutes ago       Running             kube-apiserver            0                   0c2996ffc1f8c       kube-apiserver-addons-527088
	82b8ad7ba1d66       014faa467e297       6 minutes ago       Running             etcd                      0                   e2af0d7281c80       etcd-addons-527088
	
	
	==> containerd <==
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.164325932Z" level=info msg="RemovePodSandbox for \"fd1fc787ee9629b1d64a1ecb9a212610dbda9660d7746ce541f78ae2bc0985d0\""
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.164366777Z" level=info msg="Forcibly stopping sandbox \"fd1fc787ee9629b1d64a1ecb9a212610dbda9660d7746ce541f78ae2bc0985d0\""
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.171992971Z" level=info msg="TearDown network for sandbox \"fd1fc787ee9629b1d64a1ecb9a212610dbda9660d7746ce541f78ae2bc0985d0\" successfully"
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.178101054Z" level=info msg="RemovePodSandbox \"fd1fc787ee9629b1d64a1ecb9a212610dbda9660d7746ce541f78ae2bc0985d0\" returns successfully"
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.178683483Z" level=info msg="StopPodSandbox for \"6a91e6a916904b14cdde291f2bbf4210a5350ea943b41819e1bde63f7a9e95bf\""
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.178791306Z" level=info msg="TearDown network for sandbox \"6a91e6a916904b14cdde291f2bbf4210a5350ea943b41819e1bde63f7a9e95bf\" successfully"
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.178812492Z" level=info msg="StopPodSandbox for \"6a91e6a916904b14cdde291f2bbf4210a5350ea943b41819e1bde63f7a9e95bf\" returns successfully"
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.179276390Z" level=info msg="RemovePodSandbox for \"6a91e6a916904b14cdde291f2bbf4210a5350ea943b41819e1bde63f7a9e95bf\""
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.179421464Z" level=info msg="Forcibly stopping sandbox \"6a91e6a916904b14cdde291f2bbf4210a5350ea943b41819e1bde63f7a9e95bf\""
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.179512582Z" level=info msg="TearDown network for sandbox \"6a91e6a916904b14cdde291f2bbf4210a5350ea943b41819e1bde63f7a9e95bf\" successfully"
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.184625126Z" level=info msg="RemovePodSandbox \"6a91e6a916904b14cdde291f2bbf4210a5350ea943b41819e1bde63f7a9e95bf\" returns successfully"
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.185125020Z" level=info msg="StopPodSandbox for \"9431661cf5597b8807cd4074dcba32b7f143a83df154cfbd8f542ea5620342b1\""
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.192784248Z" level=info msg="TearDown network for sandbox \"9431661cf5597b8807cd4074dcba32b7f143a83df154cfbd8f542ea5620342b1\" successfully"
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.192943886Z" level=info msg="StopPodSandbox for \"9431661cf5597b8807cd4074dcba32b7f143a83df154cfbd8f542ea5620342b1\" returns successfully"
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.193494923Z" level=info msg="RemovePodSandbox for \"9431661cf5597b8807cd4074dcba32b7f143a83df154cfbd8f542ea5620342b1\""
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.193534890Z" level=info msg="Forcibly stopping sandbox \"9431661cf5597b8807cd4074dcba32b7f143a83df154cfbd8f542ea5620342b1\""
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.200534733Z" level=info msg="TearDown network for sandbox \"9431661cf5597b8807cd4074dcba32b7f143a83df154cfbd8f542ea5620342b1\" successfully"
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.205983423Z" level=info msg="RemovePodSandbox \"9431661cf5597b8807cd4074dcba32b7f143a83df154cfbd8f542ea5620342b1\" returns successfully"
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.206889411Z" level=info msg="StopPodSandbox for \"cedff79261bbbe1dc449bea3d9ff8d8d3d550bdeee8f598b7973bc5f9cb18cbd\""
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.213832483Z" level=info msg="TearDown network for sandbox \"cedff79261bbbe1dc449bea3d9ff8d8d3d550bdeee8f598b7973bc5f9cb18cbd\" successfully"
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.213872098Z" level=info msg="StopPodSandbox for \"cedff79261bbbe1dc449bea3d9ff8d8d3d550bdeee8f598b7973bc5f9cb18cbd\" returns successfully"
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.214584036Z" level=info msg="RemovePodSandbox for \"cedff79261bbbe1dc449bea3d9ff8d8d3d550bdeee8f598b7973bc5f9cb18cbd\""
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.214727313Z" level=info msg="Forcibly stopping sandbox \"cedff79261bbbe1dc449bea3d9ff8d8d3d550bdeee8f598b7973bc5f9cb18cbd\""
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.248039229Z" level=info msg="TearDown network for sandbox \"cedff79261bbbe1dc449bea3d9ff8d8d3d550bdeee8f598b7973bc5f9cb18cbd\" successfully"
	Jun 20 18:02:19 addons-527088 containerd[767]: time="2024-06-20T18:02:19.256871595Z" level=info msg="RemovePodSandbox \"cedff79261bbbe1dc449bea3d9ff8d8d3d550bdeee8f598b7973bc5f9cb18cbd\" returns successfully"
	
	
	==> coredns [c8230811897b59f4e4a7bc34d3927e64509c544df7ff05d2da6826e36ddd05d8] <==
	[INFO] 10.244.0.16:59112 - 43545 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000495636s
	[INFO] 10.244.0.16:59112 - 48165 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003954617s
	[INFO] 10.244.0.16:55738 - 60278 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004171601s
	[INFO] 10.244.0.16:59112 - 19255 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.011682817s
	[INFO] 10.244.0.16:55738 - 5683 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.011188879s
	[INFO] 10.244.0.16:59112 - 16127 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000215761s
	[INFO] 10.244.0.16:55738 - 53662 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000059979s
	[INFO] 10.244.0.16:34344 - 2603 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000161336s
	[INFO] 10.244.0.16:48358 - 53255 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000103007s
	[INFO] 10.244.0.16:48358 - 42669 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000161074s
	[INFO] 10.244.0.16:34344 - 61960 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000151835s
	[INFO] 10.244.0.16:48358 - 44705 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000066035s
	[INFO] 10.244.0.16:34344 - 5224 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00002903s
	[INFO] 10.244.0.16:48358 - 4416 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000058748s
	[INFO] 10.244.0.16:34344 - 59708 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000028381s
	[INFO] 10.244.0.16:48358 - 25529 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000049681s
	[INFO] 10.244.0.16:34344 - 38692 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000062753s
	[INFO] 10.244.0.16:34344 - 8611 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000168286s
	[INFO] 10.244.0.16:48358 - 11902 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000082076s
	[INFO] 10.244.0.16:34344 - 21979 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001910791s
	[INFO] 10.244.0.16:48358 - 42826 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002627421s
	[INFO] 10.244.0.16:34344 - 44157 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001376131s
	[INFO] 10.244.0.16:48358 - 48305 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002151536s
	[INFO] 10.244.0.16:34344 - 16094 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000090453s
	[INFO] 10.244.0.16:48358 - 17820 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000073435s
	
	
	==> describe nodes <==
	Name:               addons-527088
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-527088
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5bfa5828b76fe92a3c5f89a54d8c76f6b5f3f8b
	                    minikube.k8s.io/name=addons-527088
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_20T17_56_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-527088
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Jun 2024 17:56:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-527088
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Jun 2024 18:02:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Jun 2024 18:01:24 +0000   Thu, 20 Jun 2024 17:56:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Jun 2024 18:01:24 +0000   Thu, 20 Jun 2024 17:56:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Jun 2024 18:01:24 +0000   Thu, 20 Jun 2024 17:56:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Jun 2024 18:01:24 +0000   Thu, 20 Jun 2024 17:56:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-527088
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022356Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022356Ki
	  pods:               110
	System Info:
	  Machine ID:                 7601478fa49b4cc49f4fba4c3f62364c
	  System UUID:                f60eab70-496a-46a7-9100-1103eea39fd0
	  Boot ID:                    53ebbd48-d2f2-463f-9f24-ddaca7e7841c
	  Kernel Version:             5.15.0-1063-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.33
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-jmkgs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  gcp-auth                    gcp-auth-5db96cd9b4-j8tw4                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  headlamp                    headlamp-7fc69f7444-kpc7m                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 coredns-7db6d8ff4d-zjv5x                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m49s
	  kube-system                 etcd-addons-527088                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m3s
	  kube-system                 kindnet-pb4v5                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m49s
	  kube-system                 kube-apiserver-addons-527088             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-controller-manager-addons-527088    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-proxy-q4j7l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-scheduler-addons-527088             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	  my-volcano                  test-job-nginx-0                         1 (50%)       1 (50%)     0 (0%)           0 (0%)         2m39s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-4rbvh          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1850m (92%)  1100m (55%)
	  memory             348Mi (4%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m47s  kube-proxy       
	  Normal  Starting                 6m3s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m3s   kubelet          Node addons-527088 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s   kubelet          Node addons-527088 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s   kubelet          Node addons-527088 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             6m3s   kubelet          Node addons-527088 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  6m3s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m3s   kubelet          Node addons-527088 status is now: NodeReady
	  Normal  RegisteredNode           5m50s  node-controller  Node addons-527088 event: Registered Node addons-527088 in Controller
	
	
	==> dmesg <==
	[  +0.000918] FS-Cache: N-cookie d=00000000c97d82c6{9p.inode} n=000000000911b3ca
	[  +0.001018] FS-Cache: N-key=[8] '96385c0100000000'
	[  +0.003384] FS-Cache: Duplicate cookie detected
	[  +0.000668] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000961] FS-Cache: O-cookie d=00000000c97d82c6{9p.inode} n=00000000059264de
	[  +0.001079] FS-Cache: O-key=[8] '96385c0100000000'
	[  +0.000700] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000926] FS-Cache: N-cookie d=00000000c97d82c6{9p.inode} n=00000000e57313d6
	[  +0.001036] FS-Cache: N-key=[8] '96385c0100000000'
	[  +2.769583] FS-Cache: Duplicate cookie detected
	[  +0.000716] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000968] FS-Cache: O-cookie d=00000000c97d82c6{9p.inode} n=00000000d831c478
	[  +0.001073] FS-Cache: O-key=[8] '95385c0100000000'
	[  +0.000757] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000968] FS-Cache: N-cookie d=00000000c97d82c6{9p.inode} n=0000000023b5ed72
	[  +0.001062] FS-Cache: N-key=[8] '95385c0100000000'
	[  +0.266070] FS-Cache: Duplicate cookie detected
	[  +0.000709] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000974] FS-Cache: O-cookie d=00000000c97d82c6{9p.inode} n=000000009e8a7ac6
	[  +0.001080] FS-Cache: O-key=[8] 'a1385c0100000000'
	[  +0.000737] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.001016] FS-Cache: N-cookie d=00000000c97d82c6{9p.inode} n=000000000911b3ca
	[  +0.001174] FS-Cache: N-key=[8] 'a1385c0100000000'
	[Jun20 17:00] hrtimer: interrupt took 3509863 ns
	[Jun20 17:25] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [82b8ad7ba1d66adfa091007914897f22a20c29576e59822eca2ebe24c7899cc7] <==
	{"level":"info","ts":"2024-06-20T17:56:12.488034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-06-20T17:56:12.488103Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-06-20T17:56:12.50372Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-20T17:56:12.504114Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-20T17:56:12.504238Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-20T17:56:12.504451Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-06-20T17:56:12.504563Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-06-20T17:56:13.275095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-20T17:56:13.275227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-20T17:56:13.275295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-06-20T17:56:13.275394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-06-20T17:56:13.275537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-06-20T17:56:13.275624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-06-20T17:56:13.275717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-06-20T17:56:13.276734Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-527088 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-20T17:56:13.276971Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-20T17:56:13.277358Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-20T17:56:13.287026Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-20T17:56:13.288807Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-06-20T17:56:13.299108Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-20T17:56:13.299357Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-20T17:56:13.299501Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-20T17:56:13.291033Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-20T17:56:13.299941Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-20T17:56:13.292477Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [928c7c9b3c66eb36f71a0788d3894918e5364863e4c9acd6566d9e404d05fd7a] <==
	2024/06/20 17:59:15 Ready to write response ...
	2024/06/20 17:59:15 Ready to marshal response ...
	2024/06/20 17:59:15 Ready to write response ...
	2024/06/20 17:59:15 Ready to marshal response ...
	2024/06/20 17:59:15 Ready to write response ...
	2024/06/20 17:59:25 Ready to marshal response ...
	2024/06/20 17:59:25 Ready to write response ...
	2024/06/20 17:59:41 Ready to marshal response ...
	2024/06/20 17:59:41 Ready to write response ...
	2024/06/20 17:59:42 Ready to marshal response ...
	2024/06/20 17:59:42 Ready to write response ...
	2024/06/20 17:59:42 Ready to marshal response ...
	2024/06/20 17:59:42 Ready to write response ...
	2024/06/20 17:59:42 Ready to marshal response ...
	2024/06/20 17:59:42 Ready to write response ...
	2024/06/20 17:59:50 Ready to marshal response ...
	2024/06/20 17:59:50 Ready to write response ...
	2024/06/20 18:00:52 Ready to marshal response ...
	2024/06/20 18:00:52 Ready to write response ...
	2024/06/20 18:01:12 Ready to marshal response ...
	2024/06/20 18:01:12 Ready to write response ...
	2024/06/20 18:01:47 Ready to marshal response ...
	2024/06/20 18:01:47 Ready to write response ...
	2024/06/20 18:01:55 Ready to marshal response ...
	2024/06/20 18:01:55 Ready to write response ...
	
	
	==> kernel <==
	 18:02:21 up  1:44,  0 users,  load average: 0.95, 1.88, 2.65
	Linux addons-527088 5.15.0-1063-aws #69~20.04.1-Ubuntu SMP Fri May 10 19:21:30 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [130c8a36b5148bd8d8ec349ee70d1ffb0d37a1f660e1a5397db3ebae4a7acb8f] <==
	I0620 18:00:13.579044       1 main.go:227] handling current node
	I0620 18:00:23.582691       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0620 18:00:23.582720       1 main.go:227] handling current node
	I0620 18:00:33.587207       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0620 18:00:33.587233       1 main.go:227] handling current node
	I0620 18:00:43.590867       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0620 18:00:43.590898       1 main.go:227] handling current node
	I0620 18:00:53.596432       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0620 18:00:53.596466       1 main.go:227] handling current node
	I0620 18:01:03.609345       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0620 18:01:03.609427       1 main.go:227] handling current node
	I0620 18:01:13.613522       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0620 18:01:13.613549       1 main.go:227] handling current node
	I0620 18:01:23.617786       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0620 18:01:23.617815       1 main.go:227] handling current node
	I0620 18:01:33.622349       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0620 18:01:33.622380       1 main.go:227] handling current node
	I0620 18:01:43.634751       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0620 18:01:43.634840       1 main.go:227] handling current node
	I0620 18:01:53.639577       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0620 18:01:53.639610       1 main.go:227] handling current node
	I0620 18:02:03.650467       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0620 18:02:03.650493       1 main.go:227] handling current node
	I0620 18:02:13.653923       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0620 18:02:13.653954       1 main.go:227] handling current node
	
	
	==> kube-apiserver [48eeef76de8465fdd9497219c604e2e2213272436bb608d8d6e26fd1de1714e4] <==
	I0620 18:01:29.016284       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0620 18:01:29.034915       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0620 18:01:29.034960       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0620 18:01:29.994730       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0620 18:01:30.037682       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0620 18:01:30.053617       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0620 18:01:34.731830       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0620 18:01:35.768944       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0620 18:01:47.092220       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0620 18:01:47.357096       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.25.149"}
	I0620 18:01:56.059445       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.177.65"}
	I0620 18:01:56.998888       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0620 18:01:57.069162       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0620 18:01:57.573954       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0620 18:01:57.626285       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0620 18:01:57.714995       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0620 18:01:57.721267       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	W0620 18:01:58.237950       1 cacher.go:168] Terminating all watchers from cacher commands.bus.volcano.sh
	W0620 18:01:58.722172       1 cacher.go:168] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0620 18:01:58.957568       1 cacher.go:168] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0620 18:01:58.957568       1 cacher.go:168] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0620 18:01:58.960277       1 cacher.go:168] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	I0620 18:02:10.928612       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0620 18:02:13.068349       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0620 18:02:15.081486       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [4ff37ab342bce1e550d8e49805d4fe6ac6579062c97835c149ed9aaecdd8732f] <==
	E0620 18:02:08.134900       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0620 18:02:08.470280       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0620 18:02:08.470315       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0620 18:02:08.918801       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0620 18:02:08.918854       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0620 18:02:08.954346       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0620 18:02:08.954410       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0620 18:02:09.233201       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0620 18:02:09.233241       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0620 18:02:12.213944       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0620 18:02:12.213979       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0620 18:02:12.995331       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0620 18:02:13.000103       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="5.743µs"
	I0620 18:02:13.005650       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0620 18:02:15.344845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="43.003µs"
	W0620 18:02:16.332234       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0620 18:02:16.332272       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0620 18:02:16.648489       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0620 18:02:16.648530       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0620 18:02:16.708737       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0620 18:02:16.708775       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0620 18:02:17.480566       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0620 18:02:17.480605       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0620 18:02:21.253364       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0620 18:02:21.253404       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [80eed4fc3e04b2e501d32a99626336bb1dcc1f324b0a9d2611f0c5602b7413cd] <==
	I0620 17:56:33.708371       1 server_linux.go:69] "Using iptables proxy"
	I0620 17:56:33.733947       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0620 17:56:33.795048       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0620 17:56:33.795098       1 server_linux.go:165] "Using iptables Proxier"
	I0620 17:56:33.800751       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0620 17:56:33.800779       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0620 17:56:33.800814       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0620 17:56:33.801141       1 server.go:872] "Version info" version="v1.30.2"
	I0620 17:56:33.801157       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0620 17:56:33.802240       1 config.go:192] "Starting service config controller"
	I0620 17:56:33.802256       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0620 17:56:33.802278       1 config.go:101] "Starting endpoint slice config controller"
	I0620 17:56:33.802282       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0620 17:56:33.802773       1 config.go:319] "Starting node config controller"
	I0620 17:56:33.802781       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0620 17:56:33.903129       1 shared_informer.go:320] Caches are synced for node config
	I0620 17:56:33.903198       1 shared_informer.go:320] Caches are synced for service config
	I0620 17:56:33.903228       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [79307dac4e2a2420a5932204ea8b209698d016a236b57aa71f93ef7f43afe3f0] <==
	W0620 17:56:15.940144       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0620 17:56:15.940167       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0620 17:56:15.943138       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0620 17:56:15.943176       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0620 17:56:15.943241       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0620 17:56:15.943261       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0620 17:56:15.943401       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0620 17:56:15.943505       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0620 17:56:16.764475       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0620 17:56:16.764736       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0620 17:56:16.796668       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0620 17:56:16.797999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0620 17:56:16.802643       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0620 17:56:16.804340       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0620 17:56:17.021213       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0620 17:56:17.021254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0620 17:56:17.037495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0620 17:56:17.037540       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0620 17:56:17.042254       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0620 17:56:17.042297       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0620 17:56:17.080911       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0620 17:56:17.081119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0620 17:56:17.203939       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0620 17:56:17.204170       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0620 17:56:19.519589       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 20 18:02:02 addons-527088 kubelet[1486]: E0620 18:02:02.387757    1486 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(6d0ce51f-1380-413e-828f-bf90177784f8)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="6d0ce51f-1380-413e-828f-bf90177784f8"
	Jun 20 18:02:12 addons-527088 kubelet[1486]: I0620 18:02:12.100835    1486 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsvhx\" (UniqueName: \"kubernetes.io/projected/6d0ce51f-1380-413e-828f-bf90177784f8-kube-api-access-hsvhx\") pod \"6d0ce51f-1380-413e-828f-bf90177784f8\" (UID: \"6d0ce51f-1380-413e-828f-bf90177784f8\") "
	Jun 20 18:02:12 addons-527088 kubelet[1486]: I0620 18:02:12.105372    1486 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d0ce51f-1380-413e-828f-bf90177784f8-kube-api-access-hsvhx" (OuterVolumeSpecName: "kube-api-access-hsvhx") pod "6d0ce51f-1380-413e-828f-bf90177784f8" (UID: "6d0ce51f-1380-413e-828f-bf90177784f8"). InnerVolumeSpecName "kube-api-access-hsvhx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 20 18:02:12 addons-527088 kubelet[1486]: I0620 18:02:12.201550    1486 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hsvhx\" (UniqueName: \"kubernetes.io/projected/6d0ce51f-1380-413e-828f-bf90177784f8-kube-api-access-hsvhx\") on node \"addons-527088\" DevicePath \"\""
	Jun 20 18:02:12 addons-527088 kubelet[1486]: I0620 18:02:12.318831    1486 scope.go:117] "RemoveContainer" containerID="a0ece36742a7c3c86274208839b427f64531621b730e63354f7ac3bd9f75c56f"
	Jun 20 18:02:12 addons-527088 kubelet[1486]: I0620 18:02:12.389970    1486 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d0ce51f-1380-413e-828f-bf90177784f8" path="/var/lib/kubelet/pods/6d0ce51f-1380-413e-828f-bf90177784f8/volumes"
	Jun 20 18:02:14 addons-527088 kubelet[1486]: I0620 18:02:14.386959    1486 scope.go:117] "RemoveContainer" containerID="179e0a848b824766817167b10fcb76159ec2eb8a02a687ac0b9422d07ceaeb33"
	Jun 20 18:02:14 addons-527088 kubelet[1486]: I0620 18:02:14.392307    1486 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22222a7c-51c2-43cd-a647-55db17e77d80" path="/var/lib/kubelet/pods/22222a7c-51c2-43cd-a647-55db17e77d80/volumes"
	Jun 20 18:02:14 addons-527088 kubelet[1486]: I0620 18:02:14.394288    1486 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4694c862-b85d-4645-aea3-99cfb2dc6e10" path="/var/lib/kubelet/pods/4694c862-b85d-4645-aea3-99cfb2dc6e10/volumes"
	Jun 20 18:02:15 addons-527088 kubelet[1486]: I0620 18:02:15.328792    1486 scope.go:117] "RemoveContainer" containerID="179e0a848b824766817167b10fcb76159ec2eb8a02a687ac0b9422d07ceaeb33"
	Jun 20 18:02:15 addons-527088 kubelet[1486]: I0620 18:02:15.329173    1486 scope.go:117] "RemoveContainer" containerID="296f80b16e7ab051a73e95f68d3b5f8241351b8a021cfa7779a4a39db3de046b"
	Jun 20 18:02:15 addons-527088 kubelet[1486]: E0620 18:02:15.329470    1486 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-jmkgs_default(3605425c-545a-49c9-a567-04b6143c5330)\"" pod="default/hello-world-app-86c47465fc-jmkgs" podUID="3605425c-545a-49c9-a567-04b6143c5330"
	Jun 20 18:02:16 addons-527088 kubelet[1486]: I0620 18:02:16.328152    1486 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bf4p\" (UniqueName: \"kubernetes.io/projected/a5adbb26-8e99-4251-b616-8dbebcd5cb16-kube-api-access-6bf4p\") pod \"a5adbb26-8e99-4251-b616-8dbebcd5cb16\" (UID: \"a5adbb26-8e99-4251-b616-8dbebcd5cb16\") "
	Jun 20 18:02:16 addons-527088 kubelet[1486]: I0620 18:02:16.328273    1486 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a5adbb26-8e99-4251-b616-8dbebcd5cb16-webhook-cert\") pod \"a5adbb26-8e99-4251-b616-8dbebcd5cb16\" (UID: \"a5adbb26-8e99-4251-b616-8dbebcd5cb16\") "
	Jun 20 18:02:16 addons-527088 kubelet[1486]: I0620 18:02:16.331066    1486 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5adbb26-8e99-4251-b616-8dbebcd5cb16-kube-api-access-6bf4p" (OuterVolumeSpecName: "kube-api-access-6bf4p") pod "a5adbb26-8e99-4251-b616-8dbebcd5cb16" (UID: "a5adbb26-8e99-4251-b616-8dbebcd5cb16"). InnerVolumeSpecName "kube-api-access-6bf4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 20 18:02:16 addons-527088 kubelet[1486]: I0620 18:02:16.334208    1486 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5adbb26-8e99-4251-b616-8dbebcd5cb16-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a5adbb26-8e99-4251-b616-8dbebcd5cb16" (UID: "a5adbb26-8e99-4251-b616-8dbebcd5cb16"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 20 18:02:16 addons-527088 kubelet[1486]: I0620 18:02:16.337208    1486 scope.go:117] "RemoveContainer" containerID="88b4c66f41565e1b52d6caac436a83ff69c7bccd87b504d2b0e5e21326473cb9"
	Jun 20 18:02:16 addons-527088 kubelet[1486]: I0620 18:02:16.351347    1486 scope.go:117] "RemoveContainer" containerID="88b4c66f41565e1b52d6caac436a83ff69c7bccd87b504d2b0e5e21326473cb9"
	Jun 20 18:02:16 addons-527088 kubelet[1486]: E0620 18:02:16.352660    1486 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"88b4c66f41565e1b52d6caac436a83ff69c7bccd87b504d2b0e5e21326473cb9\": not found" containerID="88b4c66f41565e1b52d6caac436a83ff69c7bccd87b504d2b0e5e21326473cb9"
	Jun 20 18:02:16 addons-527088 kubelet[1486]: I0620 18:02:16.352702    1486 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"88b4c66f41565e1b52d6caac436a83ff69c7bccd87b504d2b0e5e21326473cb9"} err="failed to get container status \"88b4c66f41565e1b52d6caac436a83ff69c7bccd87b504d2b0e5e21326473cb9\": rpc error: code = NotFound desc = an error occurred when try to find container \"88b4c66f41565e1b52d6caac436a83ff69c7bccd87b504d2b0e5e21326473cb9\": not found"
	Jun 20 18:02:16 addons-527088 kubelet[1486]: I0620 18:02:16.389643    1486 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5adbb26-8e99-4251-b616-8dbebcd5cb16" path="/var/lib/kubelet/pods/a5adbb26-8e99-4251-b616-8dbebcd5cb16/volumes"
	Jun 20 18:02:16 addons-527088 kubelet[1486]: I0620 18:02:16.428573    1486 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-6bf4p\" (UniqueName: \"kubernetes.io/projected/a5adbb26-8e99-4251-b616-8dbebcd5cb16-kube-api-access-6bf4p\") on node \"addons-527088\" DevicePath \"\""
	Jun 20 18:02:16 addons-527088 kubelet[1486]: I0620 18:02:16.428625    1486 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a5adbb26-8e99-4251-b616-8dbebcd5cb16-webhook-cert\") on node \"addons-527088\" DevicePath \"\""
	Jun 20 18:02:18 addons-527088 kubelet[1486]: I0620 18:02:18.856240    1486 scope.go:117] "RemoveContainer" containerID="b54d21b716f59a4e2142ea7ff26fb12258cd6ec8b41ae6f1910097b80115ad94"
	Jun 20 18:02:18 addons-527088 kubelet[1486]: I0620 18:02:18.866411    1486 scope.go:117] "RemoveContainer" containerID="da912eff562cb64cf35257ca9f8d89788b920ab7bcbdf28479f00f7db6ea2642"
	
	
	==> storage-provisioner [94c4e30569abc2362d1e1faed45576571607c5f9ff7e7c26f7284e78c77661df] <==
	I0620 17:56:37.166529       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0620 17:56:37.217236       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0620 17:56:37.217325       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0620 17:56:37.226887       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0620 17:56:37.227067       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-527088_bf015a39-5e2c-4b21-b810-798446b50fec!
	I0620 17:56:37.227967       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dcf6a55f-f6c4-4790-aa53-c78ce8d520ac", APIVersion:"v1", ResourceVersion:"499", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-527088_bf015a39-5e2c-4b21-b810-798446b50fec became leader
	I0620 17:56:37.327382       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-527088_bf015a39-5e2c-4b21-b810-798446b50fec!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-527088 -n addons-527088
helpers_test.go:261: (dbg) Run:  kubectl --context addons-527088 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (35.82s)
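Note: the post-mortem at helpers_test.go:261 collects the names of every pod that is not in the Running phase. For readers reproducing that diagnostic step outside the test harness, a minimal Go sketch of the same query is shown below; the context name addons-527088 is taken from the log above, and the helper name listNotRunning is illustrative only, not part of the test code.

package main

import (
	"fmt"
	"os/exec"
)

// listNotRunning shells out to kubectl the same way the post-mortem step does:
// it prints the names of all pods, in all namespaces, whose phase is not Running.
func listNotRunning(kubeContext string) (string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").CombinedOutput()
	return string(out), err
}

func main() {
	names, err := listNotRunning("addons-527088")
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Println("not Running:", names)
}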

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 image load --daemon gcr.io/google-containers/addon-resizer:functional-979723 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-979723 image load --daemon gcr.io/google-containers/addon-resizer:functional-979723 --alsologtostderr: (4.255282437s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-979723" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.48s)
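Note: the assertion at functional_test.go:442 amounts to running `image ls` against the profile and looking for the expected tag in the output. A minimal stand-alone sketch of that verification is below; the binary path out/minikube-linux-arm64 and profile functional-979723 come from the log, while imageLoaded is an illustrative helper and not the test's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageLoaded runs `minikube image ls` for the given profile and reports
// whether the listing contains the expected image reference.
func imageLoaded(minikubeBin, profile, image string) (bool, error) {
	out, err := exec.Command(minikubeBin, "-p", profile, "image", "ls").CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("image ls: %v: %s", err, out)
	}
	return strings.Contains(string(out), image), nil
}

func main() {
	ok, err := imageLoaded("out/minikube-linux-arm64", "functional-979723",
		"gcr.io/google-containers/addon-resizer:functional-979723")
	fmt.Println(ok, err)
}

The same check is what ImageReloadDaemon and ImageTagAndLoadDaemon fail on below.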

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 image load --daemon gcr.io/google-containers/addon-resizer:functional-979723 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-979723 image load --daemon gcr.io/google-containers/addon-resizer:functional-979723 --alsologtostderr: (3.198566336s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-979723" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.605820698s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-979723
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 image load --daemon gcr.io/google-containers/addon-resizer:functional-979723 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-979723 image load --daemon gcr.io/google-containers/addon-resizer:functional-979723 --alsologtostderr: (3.247421797s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-979723" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 image save gcr.io/google-containers/addon-resizer:functional-979723 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)
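Note: the expectation at functional_test.go:385 is simply that the tarball exists on disk after `image save`. A hedged sketch of that save-then-stat sequence follows; saveImageToFile is an illustrative helper and /tmp/addon-resizer-save.tar is a stand-in path, not the workspace path used by the test.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// saveImageToFile runs `minikube image save` and then verifies the tarball
// actually exists on disk, which is the condition the test asserts.
func saveImageToFile(minikubeBin, profile, image, path string) error {
	out, err := exec.Command(minikubeBin, "-p", profile, "image", "save", image, path).CombinedOutput()
	if err != nil {
		return fmt.Errorf("image save: %v: %s", err, out)
	}
	if _, err := os.Stat(path); err != nil {
		return fmt.Errorf("expected %s to exist after image save: %w", path, err)
	}
	return nil
}

func main() {
	err := saveImageToFile("out/minikube-linux-arm64", "functional-979723",
		"gcr.io/google-containers/addon-resizer:functional-979723",
		"/tmp/addon-resizer-save.tar")
	fmt.Println(err)
}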

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0620 18:09:02.998449  314924 out.go:291] Setting OutFile to fd 1 ...
	I0620 18:09:02.999371  314924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:09:02.999410  314924 out.go:304] Setting ErrFile to fd 2...
	I0620 18:09:02.999423  314924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:09:02.999674  314924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19106-274269/.minikube/bin
	I0620 18:09:03.000352  314924 config.go:182] Loaded profile config "functional-979723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0620 18:09:03.000522  314924 config.go:182] Loaded profile config "functional-979723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0620 18:09:03.001045  314924 cli_runner.go:164] Run: docker container inspect functional-979723 --format={{.State.Status}}
	I0620 18:09:03.019478  314924 ssh_runner.go:195] Run: systemctl --version
	I0620 18:09:03.019622  314924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-979723
	I0620 18:09:03.037004  314924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/functional-979723/id_rsa Username:docker}
	I0620 18:09:03.131555  314924 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0620 18:09:03.131624  314924 cache_images.go:254] Failed to load cached images for profile functional-979723. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0620 18:09:03.131650  314924 cache_images.go:262] succeeded pushing to: 
	I0620 18:09:03.131656  314924 cache_images.go:263] failed pushing to: functional-979723

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)
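Note: the stderr above shows the load failing on a stat of the tarball that the earlier ImageSaveToFile failure never produced. A hedged sketch of a pre-flight check that would surface that missing artifact explicitly, before invoking `image load`, is shown below; loadImageFromFile is an illustrative helper and the /tmp path is a stand-in.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImageFromFile checks that the tarball exists before asking minikube to
// load it, so a missing artifact from an earlier step fails fast and clearly.
func loadImageFromFile(minikubeBin, profile, tarPath string) error {
	if _, err := os.Stat(tarPath); err != nil {
		return fmt.Errorf("tarball missing, nothing to load: %w", err)
	}
	out, err := exec.Command(minikubeBin, "-p", profile, "image", "load", tarPath).CombinedOutput()
	if err != nil {
		return fmt.Errorf("image load: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(loadImageFromFile("out/minikube-linux-arm64", "functional-979723",
		"/tmp/addon-resizer-save.tar"))
}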

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (372.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-337794 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0620 18:46:09.190608  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-337794 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 80 (6m9.258014637s)

                                                
                                                
-- stdout --
	* [old-k8s-version-337794] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19106
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19106-274269/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19106-274269/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-337794" primary control-plane node in "old-k8s-version-337794" cluster
	* Pulling base image v0.0.44-1718753665-19106 ...
	* Restarting existing docker container for "old-k8s-version-337794" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.6.33 ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-337794 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0620 18:45:18.437040  474995 out.go:291] Setting OutFile to fd 1 ...
	I0620 18:45:18.437188  474995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:45:18.437193  474995 out.go:304] Setting ErrFile to fd 2...
	I0620 18:45:18.437197  474995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:45:18.437448  474995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19106-274269/.minikube/bin
	I0620 18:45:18.437801  474995 out.go:298] Setting JSON to false
	I0620 18:45:18.438865  474995 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8869,"bootTime":1718900250,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0620 18:45:18.438934  474995 start.go:139] virtualization:  
	I0620 18:45:18.441867  474995 out.go:177] * [old-k8s-version-337794] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0620 18:45:18.444851  474995 out.go:177]   - MINIKUBE_LOCATION=19106
	I0620 18:45:18.444923  474995 notify.go:220] Checking for updates...
	I0620 18:45:18.450555  474995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0620 18:45:18.452735  474995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19106-274269/kubeconfig
	I0620 18:45:18.454811  474995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19106-274269/.minikube
	I0620 18:45:18.457256  474995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0620 18:45:18.459834  474995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0620 18:45:18.462401  474995 config.go:182] Loaded profile config "old-k8s-version-337794": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0620 18:45:18.465258  474995 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0620 18:45:18.467509  474995 driver.go:392] Setting default libvirt URI to qemu:///system
	I0620 18:45:18.503462  474995 docker.go:122] docker version: linux-26.1.4:Docker Engine - Community
	I0620 18:45:18.503597  474995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0620 18:45:18.612715  474995 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:67 SystemTime:2024-06-20 18:45:18.600842937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0620 18:45:18.612822  474995 docker.go:295] overlay module found
	I0620 18:45:18.615179  474995 out.go:177] * Using the docker driver based on existing profile
	I0620 18:45:18.616828  474995 start.go:297] selected driver: docker
	I0620 18:45:18.616845  474995 start.go:901] validating driver "docker" against &{Name:old-k8s-version-337794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-337794 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0620 18:45:18.616949  474995 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0620 18:45:18.617583  474995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0620 18:45:18.710723  474995 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:67 SystemTime:2024-06-20 18:45:18.701213114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0620 18:45:18.711156  474995 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0620 18:45:18.711185  474995 cni.go:84] Creating CNI manager for ""
	I0620 18:45:18.711199  474995 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0620 18:45:18.711246  474995 start.go:340] cluster config:
	{Name:old-k8s-version-337794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-337794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0620 18:45:18.713735  474995 out.go:177] * Starting "old-k8s-version-337794" primary control-plane node in "old-k8s-version-337794" cluster
	I0620 18:45:18.716742  474995 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0620 18:45:18.720354  474995 out.go:177] * Pulling base image v0.0.44-1718753665-19106 ...
	I0620 18:45:18.722760  474995 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0620 18:45:18.723114  474995 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 in local docker daemon
	I0620 18:45:18.724484  474995 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19106-274269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0620 18:45:18.724507  474995 cache.go:56] Caching tarball of preloaded images
	I0620 18:45:18.725066  474995 preload.go:173] Found /home/jenkins/minikube-integration/19106-274269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0620 18:45:18.725086  474995 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0620 18:45:18.725679  474995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/config.json ...
	I0620 18:45:18.792635  474995 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 in local docker daemon, skipping pull
	I0620 18:45:18.792666  474995 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 exists in daemon, skipping load
	I0620 18:45:18.792680  474995 cache.go:194] Successfully downloaded all kic artifacts
	I0620 18:45:18.792719  474995 start.go:360] acquireMachinesLock for old-k8s-version-337794: {Name:mk0ccb636847a3681a93fff3d9f51b27a5778702 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0620 18:45:18.792788  474995 start.go:364] duration metric: took 43.265µs to acquireMachinesLock for "old-k8s-version-337794"
	I0620 18:45:18.792810  474995 start.go:96] Skipping create...Using existing machine configuration
	I0620 18:45:18.792819  474995 fix.go:54] fixHost starting: 
	I0620 18:45:18.793095  474995 cli_runner.go:164] Run: docker container inspect old-k8s-version-337794 --format={{.State.Status}}
	I0620 18:45:18.874955  474995 fix.go:112] recreateIfNeeded on old-k8s-version-337794: state=Stopped err=<nil>
	W0620 18:45:18.874986  474995 fix.go:138] unexpected machine state, will restart: <nil>
	I0620 18:45:18.877656  474995 out.go:177] * Restarting existing docker container for "old-k8s-version-337794" ...
	I0620 18:45:18.880280  474995 cli_runner.go:164] Run: docker start old-k8s-version-337794
	I0620 18:45:19.312157  474995 cli_runner.go:164] Run: docker container inspect old-k8s-version-337794 --format={{.State.Status}}
	I0620 18:45:19.348695  474995 kic.go:430] container "old-k8s-version-337794" state is running.
	I0620 18:45:19.349168  474995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-337794
	I0620 18:45:19.379711  474995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/config.json ...
	I0620 18:45:19.380483  474995 machine.go:94] provisionDockerMachine start ...
	I0620 18:45:19.380797  474995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-337794
	I0620 18:45:19.407141  474995 main.go:141] libmachine: Using SSH client type: native
	I0620 18:45:19.409803  474995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2bc0] 0x3e5420 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I0620 18:45:19.409837  474995 main.go:141] libmachine: About to run SSH command:
	hostname
	I0620 18:45:19.413248  474995 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0620 18:45:22.551106  474995 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-337794
	
	I0620 18:45:22.551135  474995 ubuntu.go:169] provisioning hostname "old-k8s-version-337794"
	I0620 18:45:22.551261  474995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-337794
	I0620 18:45:22.579025  474995 main.go:141] libmachine: Using SSH client type: native
	I0620 18:45:22.579282  474995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2bc0] 0x3e5420 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I0620 18:45:22.579300  474995 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-337794 && echo "old-k8s-version-337794" | sudo tee /etc/hostname
	I0620 18:45:22.732352  474995 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-337794
	
	I0620 18:45:22.732495  474995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-337794
	I0620 18:45:22.756623  474995 main.go:141] libmachine: Using SSH client type: native
	I0620 18:45:22.756875  474995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2bc0] 0x3e5420 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I0620 18:45:22.756892  474995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-337794' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-337794/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-337794' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0620 18:45:22.891459  474995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0620 18:45:22.891528  474995 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19106-274269/.minikube CaCertPath:/home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19106-274269/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19106-274269/.minikube}
	I0620 18:45:22.891584  474995 ubuntu.go:177] setting up certificates
	I0620 18:45:22.891598  474995 provision.go:84] configureAuth start
	I0620 18:45:22.891677  474995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-337794
	I0620 18:45:22.920535  474995 provision.go:143] copyHostCerts
	I0620 18:45:22.920620  474995 exec_runner.go:144] found /home/jenkins/minikube-integration/19106-274269/.minikube/ca.pem, removing ...
	I0620 18:45:22.920634  474995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19106-274269/.minikube/ca.pem
	I0620 18:45:22.920712  474995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19106-274269/.minikube/ca.pem (1082 bytes)
	I0620 18:45:22.920826  474995 exec_runner.go:144] found /home/jenkins/minikube-integration/19106-274269/.minikube/cert.pem, removing ...
	I0620 18:45:22.920835  474995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19106-274269/.minikube/cert.pem
	I0620 18:45:22.920865  474995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19106-274269/.minikube/cert.pem (1123 bytes)
	I0620 18:45:22.920934  474995 exec_runner.go:144] found /home/jenkins/minikube-integration/19106-274269/.minikube/key.pem, removing ...
	I0620 18:45:22.920945  474995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19106-274269/.minikube/key.pem
	I0620 18:45:22.920973  474995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19106-274269/.minikube/key.pem (1679 bytes)
	I0620 18:45:22.921040  474995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19106-274269/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-337794 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-337794]
	I0620 18:45:23.284157  474995 provision.go:177] copyRemoteCerts
	I0620 18:45:23.284306  474995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0620 18:45:23.284378  474995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-337794
	I0620 18:45:23.300922  474995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/old-k8s-version-337794/id_rsa Username:docker}
	I0620 18:45:23.400575  474995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0620 18:45:23.430366  474995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0620 18:45:23.460041  474995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0620 18:45:23.485942  474995 provision.go:87] duration metric: took 594.32653ms to configureAuth
	I0620 18:45:23.486016  474995 ubuntu.go:193] setting minikube options for container-runtime
	I0620 18:45:23.486252  474995 config.go:182] Loaded profile config "old-k8s-version-337794": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0620 18:45:23.486286  474995 machine.go:97] duration metric: took 4.105788263s to provisionDockerMachine
	I0620 18:45:23.486309  474995 start.go:293] postStartSetup for "old-k8s-version-337794" (driver="docker")
	I0620 18:45:23.486336  474995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0620 18:45:23.486418  474995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0620 18:45:23.486500  474995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-337794
	I0620 18:45:23.505500  474995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/old-k8s-version-337794/id_rsa Username:docker}
	I0620 18:45:23.604963  474995 ssh_runner.go:195] Run: cat /etc/os-release
	I0620 18:45:23.608660  474995 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0620 18:45:23.608691  474995 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0620 18:45:23.608702  474995 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0620 18:45:23.608709  474995 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0620 18:45:23.608719  474995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19106-274269/.minikube/addons for local assets ...
	I0620 18:45:23.608771  474995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19106-274269/.minikube/files for local assets ...
	I0620 18:45:23.608844  474995 filesync.go:149] local asset: /home/jenkins/minikube-integration/19106-274269/.minikube/files/etc/ssl/certs/2796712.pem -> 2796712.pem in /etc/ssl/certs
	I0620 18:45:23.608940  474995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0620 18:45:23.618456  474995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/files/etc/ssl/certs/2796712.pem --> /etc/ssl/certs/2796712.pem (1708 bytes)
	I0620 18:45:23.644756  474995 start.go:296] duration metric: took 158.417821ms for postStartSetup
	I0620 18:45:23.644833  474995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0620 18:45:23.644873  474995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-337794
	I0620 18:45:23.663384  474995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/old-k8s-version-337794/id_rsa Username:docker}
	I0620 18:45:23.756276  474995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0620 18:45:23.760892  474995 fix.go:56] duration metric: took 4.968065324s for fixHost
	I0620 18:45:23.760915  474995 start.go:83] releasing machines lock for "old-k8s-version-337794", held for 4.968116072s
	I0620 18:45:23.760979  474995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-337794
	I0620 18:45:23.777609  474995 ssh_runner.go:195] Run: cat /version.json
	I0620 18:45:23.777645  474995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0620 18:45:23.777661  474995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-337794
	I0620 18:45:23.777697  474995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-337794
	I0620 18:45:23.807770  474995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/old-k8s-version-337794/id_rsa Username:docker}
	I0620 18:45:23.809688  474995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/old-k8s-version-337794/id_rsa Username:docker}
	I0620 18:45:23.915647  474995 ssh_runner.go:195] Run: systemctl --version
	I0620 18:45:24.059022  474995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0620 18:45:24.063733  474995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0620 18:45:24.083290  474995 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0620 18:45:24.083380  474995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0620 18:45:24.093468  474995 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
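(The two find/sed invocations above patch any loopback CNI config, adding a missing "name" field and pinning cniVersion to 1.0.0, then move bridge/podman configs aside. A rough Go equivalent of the loopback patch is sketched below for illustration only; it is not minikube's actual code.)

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // patchLoopback approximates the sed pipeline in the log: insert a "name"
    // field before "type": "loopback" if missing, and pin cniVersion to 1.0.0.
    func patchLoopback(conf string) string {
        if strings.Contains(conf, `"type": "loopback"`) && !strings.Contains(conf, `"name"`) {
            conf = strings.Replace(conf,
                `"type": "loopback"`,
                `"name": "loopback",
    "type": "loopback"`, 1)
        }
        re := regexp.MustCompile(`"cniVersion": ".*"`)
        return re.ReplaceAllString(conf, `"cniVersion": "1.0.0"`)
    }

    func main() {
        in := `{
    "cniVersion": "0.3.1",
    "type": "loopback"
}`
        fmt.Println(patchLoopback(in))
    }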
	I0620 18:45:24.093495  474995 start.go:494] detecting cgroup driver to use...
	I0620 18:45:24.093529  474995 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0620 18:45:24.093584  474995 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0620 18:45:24.109373  474995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0620 18:45:24.123258  474995 docker.go:217] disabling cri-docker service (if available) ...
	I0620 18:45:24.123320  474995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0620 18:45:24.146443  474995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0620 18:45:24.164688  474995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0620 18:45:24.291590  474995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0620 18:45:24.395659  474995 docker.go:233] disabling docker service ...
	I0620 18:45:24.395733  474995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0620 18:45:24.410036  474995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0620 18:45:24.422828  474995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0620 18:45:24.530273  474995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0620 18:45:24.634593  474995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0620 18:45:24.648644  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0620 18:45:24.666972  474995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0620 18:45:24.677632  474995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0620 18:45:24.688194  474995 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0620 18:45:24.688314  474995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0620 18:45:24.698793  474995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0620 18:45:24.708986  474995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0620 18:45:24.719394  474995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0620 18:45:24.729414  474995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0620 18:45:24.738935  474995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0620 18:45:24.749006  474995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0620 18:45:24.758138  474995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0620 18:45:24.767104  474995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0620 18:45:24.873816  474995 ssh_runner.go:195] Run: sudo systemctl restart containerd
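(The sed edits above rewrite /etc/containerd/config.toml so the runtime matches the detected "cgroupfs" driver: SystemdCgroup = false, runc v2, pause:3.2 sandbox image and conf_dir /etc/cni/net.d, after which containerd is restarted. A small Go sketch of just the SystemdCgroup edit follows, assuming a config.toml fragment like the one embedded in the code; this is an illustration, not the tool's implementation.)

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // A fragment resembling /etc/containerd/config.toml; the real file is
        // much larger. SystemdCgroup is forced to false because the detected
        // host cgroup driver is "cgroupfs".
        toml := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true`
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Println(re.ReplaceAllString(toml, "${1}SystemdCgroup = false"))
    }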
	I0620 18:45:25.107063  474995 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0620 18:45:25.107223  474995 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0620 18:45:25.111510  474995 start.go:562] Will wait 60s for crictl version
	I0620 18:45:25.111575  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:45:25.115509  474995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0620 18:45:25.174439  474995 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.33
	RuntimeApiVersion:  v1
	I0620 18:45:25.174525  474995 ssh_runner.go:195] Run: containerd --version
	I0620 18:45:25.201209  474995 ssh_runner.go:195] Run: containerd --version
	I0620 18:45:25.225041  474995 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.6.33 ...
	I0620 18:45:25.227956  474995 cli_runner.go:164] Run: docker network inspect old-k8s-version-337794 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0620 18:45:25.246489  474995 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0620 18:45:25.250339  474995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
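(The bash one-liner above drops any stale host.minikube.internal entry from /etc/hosts and appends the gateway mapping 192.168.85.1. A hedged Go sketch of the same rewrite is shown below; the helper name ensureHostEntry is illustrative.)

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostEntry sketches the /etc/hosts rewrite from the log: remove any
    // existing line for the given name (the grep -v step), then append the
    // desired "ip<TAB>name" mapping.
    func ensureHostEntry(hosts, ip, name string) string {
        var out []string
        for _, line := range strings.Split(hosts, "\n") {
            if strings.HasSuffix(strings.TrimSpace(line), name) {
                continue
            }
            out = append(out, line)
        }
        return strings.Join(out, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
    }

    func main() {
        fmt.Print(ensureHostEntry("127.0.0.1\tlocalhost", "192.168.85.1", "host.minikube.internal"))
    }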
	I0620 18:45:25.265900  474995 kubeadm.go:877] updating cluster {Name:old-k8s-version-337794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-337794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0620 18:45:25.266021  474995 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0620 18:45:25.266077  474995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0620 18:45:25.314915  474995 containerd.go:627] all images are preloaded for containerd runtime.
	I0620 18:45:25.314941  474995 containerd.go:534] Images already preloaded, skipping extraction
	I0620 18:45:25.315030  474995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0620 18:45:25.409657  474995 containerd.go:627] all images are preloaded for containerd runtime.
	I0620 18:45:25.409693  474995 cache_images.go:84] Images are preloaded, skipping loading
	I0620 18:45:25.409701  474995 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0620 18:45:25.409852  474995 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-337794 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-337794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0620 18:45:25.409935  474995 ssh_runner.go:195] Run: sudo crictl info
	I0620 18:45:25.471535  474995 cni.go:84] Creating CNI manager for ""
	I0620 18:45:25.471575  474995 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0620 18:45:25.471591  474995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0620 18:45:25.471628  474995 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-337794 NodeName:old-k8s-version-337794 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0620 18:45:25.471808  474995 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-337794"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0620 18:45:25.471900  474995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0620 18:45:25.483694  474995 binaries.go:44] Found k8s binaries, skipping transfer
	I0620 18:45:25.483780  474995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0620 18:45:25.494629  474995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0620 18:45:25.519542  474995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0620 18:45:25.542514  474995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
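(The kubeadm config logged above is rendered from the cluster profile — node IP 192.168.85.2, CRI socket /run/containerd/containerd.sock, pod subnet 10.244.0.0/16 — and written out as /var/tmp/minikube/kubeadm.yaml.new. The following text/template sketch shows how such a fragment could be produced from per-node values; the template and field names are illustrative and are not minikube's actual template.)

    package main

    import (
        "os"
        "text/template"
    )

    // A minimal, assumed template for the InitConfiguration fragment seen in
    // the log; only a few fields are modelled.
    const frag = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.Name}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(frag))
        _ = t.Execute(os.Stdout, map[string]string{
            "NodeIP":    "192.168.85.2",
            "Port":      "8443",
            "CRISocket": "/run/containerd/containerd.sock",
            "Name":      "old-k8s-version-337794",
        })
    }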
	I0620 18:45:25.565419  474995 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0620 18:45:25.570423  474995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0620 18:45:25.583225  474995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0620 18:45:25.719629  474995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0620 18:45:25.735193  474995 certs.go:68] Setting up /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794 for IP: 192.168.85.2
	I0620 18:45:25.735216  474995 certs.go:194] generating shared ca certs ...
	I0620 18:45:25.735238  474995 certs.go:226] acquiring lock for ca certs: {Name:mk8b11ba3bc5463026cd3822a512e17542776a35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 18:45:25.735387  474995 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19106-274269/.minikube/ca.key
	I0620 18:45:25.735446  474995 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19106-274269/.minikube/proxy-client-ca.key
	I0620 18:45:25.735458  474995 certs.go:256] generating profile certs ...
	I0620 18:45:25.735563  474995 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/client.key
	I0620 18:45:25.735651  474995 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/apiserver.key.b11c3d2b
	I0620 18:45:25.735698  474995 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/proxy-client.key
	I0620 18:45:25.735839  474995 certs.go:484] found cert: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/279671.pem (1338 bytes)
	W0620 18:45:25.735875  474995 certs.go:480] ignoring /home/jenkins/minikube-integration/19106-274269/.minikube/certs/279671_empty.pem, impossibly tiny 0 bytes
	I0620 18:45:25.735890  474995 certs.go:484] found cert: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca-key.pem (1679 bytes)
	I0620 18:45:25.735916  474995 certs.go:484] found cert: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca.pem (1082 bytes)
	I0620 18:45:25.735943  474995 certs.go:484] found cert: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/cert.pem (1123 bytes)
	I0620 18:45:25.735975  474995 certs.go:484] found cert: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/key.pem (1679 bytes)
	I0620 18:45:25.736031  474995 certs.go:484] found cert: /home/jenkins/minikube-integration/19106-274269/.minikube/files/etc/ssl/certs/2796712.pem (1708 bytes)
	I0620 18:45:25.736728  474995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0620 18:45:25.813445  474995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0620 18:45:25.876821  474995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0620 18:45:25.948167  474995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0620 18:45:25.976903  474995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0620 18:45:26.004615  474995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0620 18:45:26.034398  474995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0620 18:45:26.061543  474995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0620 18:45:26.088862  474995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0620 18:45:26.115902  474995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/certs/279671.pem --> /usr/share/ca-certificates/279671.pem (1338 bytes)
	I0620 18:45:26.143210  474995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/files/etc/ssl/certs/2796712.pem --> /usr/share/ca-certificates/2796712.pem (1708 bytes)
	I0620 18:45:26.170396  474995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0620 18:45:26.190523  474995 ssh_runner.go:195] Run: openssl version
	I0620 18:45:26.196659  474995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0620 18:45:26.207551  474995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0620 18:45:26.211735  474995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 20 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I0620 18:45:26.211851  474995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0620 18:45:26.219487  474995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0620 18:45:26.230373  474995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/279671.pem && ln -fs /usr/share/ca-certificates/279671.pem /etc/ssl/certs/279671.pem"
	I0620 18:45:26.240676  474995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/279671.pem
	I0620 18:45:26.244640  474995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 20 18:05 /usr/share/ca-certificates/279671.pem
	I0620 18:45:26.244757  474995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/279671.pem
	I0620 18:45:26.252136  474995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/279671.pem /etc/ssl/certs/51391683.0"
	I0620 18:45:26.261821  474995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2796712.pem && ln -fs /usr/share/ca-certificates/2796712.pem /etc/ssl/certs/2796712.pem"
	I0620 18:45:26.271781  474995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2796712.pem
	I0620 18:45:26.275574  474995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 20 18:05 /usr/share/ca-certificates/2796712.pem
	I0620 18:45:26.275657  474995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2796712.pem
	I0620 18:45:26.282868  474995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2796712.pem /etc/ssl/certs/3ec20f2e.0"
	I0620 18:45:26.292489  474995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0620 18:45:26.296428  474995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0620 18:45:26.303338  474995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0620 18:45:26.310139  474995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0620 18:45:26.317293  474995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0620 18:45:26.324370  474995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0620 18:45:26.331386  474995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
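(Each `openssl x509 -checkend 86400` invocation above asserts that the certificate will still be valid 24 hours from now. An equivalent check in Go, using a hypothetical certificate path, is sketched below.)

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the PEM certificate at path is still valid at
    // now+d, roughly what `openssl x509 -checkend <seconds>` verifies.
    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        // Hypothetical path, standing in for the certs checked in the log.
        ok, err := validFor("apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }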
	I0620 18:45:26.338239  474995 kubeadm.go:391] StartCluster: {Name:old-k8s-version-337794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-337794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0620 18:45:26.338354  474995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0620 18:45:26.338424  474995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0620 18:45:26.383160  474995 cri.go:89] found id: "2361035748bd059e7b4f11b7715ec3ccdb81adbf542d803180ebb8bec389d445"
	I0620 18:45:26.383207  474995 cri.go:89] found id: "41ba1c27ef09f4269353d6500282257c6dc0ee5cc4906cc9d3f99fdacb2835c1"
	I0620 18:45:26.383214  474995 cri.go:89] found id: "13a0bb253788862ae22e6058b7f190fba11dd8db1d86a9df13f8b2aeb42fcecb"
	I0620 18:45:26.383218  474995 cri.go:89] found id: "0084076706589b3159aa05b37a0ef12c102644140788b191724b0f7267fd4e3c"
	I0620 18:45:26.383222  474995 cri.go:89] found id: "bd7d04cc7ceff18bbe3fc6366ba657c6a00df2f8696317b9d1cf5695b156ee3a"
	I0620 18:45:26.383227  474995 cri.go:89] found id: "e895300ffd7e6f784d9567baacfb9b233853b31245990e391675289e62ad7659"
	I0620 18:45:26.383230  474995 cri.go:89] found id: "78a0fe9b19212a5f753e82266fc6eda58535c93f7b45a9f7ae1027d529ba1b9f"
	I0620 18:45:26.383233  474995 cri.go:89] found id: "cfdcf7bafc98db0ed8b117a7f81af17f4aa0ffe0d900af8fb469323b24ada207"
	I0620 18:45:26.383236  474995 cri.go:89] found id: "6ce85447f9024fd431a26a603f1a93ee2dd0f45da6feddb6ce6fee810b5fdd37"
	I0620 18:45:26.383252  474995 cri.go:89] found id: ""
	I0620 18:45:26.383319  474995 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0620 18:45:26.396165  474995 cri.go:116] JSON = null
	W0620 18:45:26.396233  474995 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 9
	I0620 18:45:26.396307  474995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0620 18:45:26.405718  474995 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0620 18:45:26.405742  474995 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0620 18:45:26.405749  474995 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0620 18:45:26.405805  474995 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0620 18:45:26.414705  474995 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0620 18:45:26.415254  474995 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-337794" does not appear in /home/jenkins/minikube-integration/19106-274269/kubeconfig
	I0620 18:45:26.415379  474995 kubeconfig.go:62] /home/jenkins/minikube-integration/19106-274269/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-337794" cluster setting kubeconfig missing "old-k8s-version-337794" context setting]
	I0620 18:45:26.415698  474995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19106-274269/kubeconfig: {Name:mke344b955a4582ad77895759c31c36670e563b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 18:45:26.417510  474995 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0620 18:45:26.428106  474995 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.85.2
	I0620 18:45:26.428152  474995 kubeadm.go:591] duration metric: took 22.388158ms to restartPrimaryControlPlane
	I0620 18:45:26.428166  474995 kubeadm.go:393] duration metric: took 89.93828ms to StartCluster
	I0620 18:45:26.428188  474995 settings.go:142] acquiring lock: {Name:mk5a1a69c9e50173b6bfe88004ea354d3f5ed8f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 18:45:26.428252  474995 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19106-274269/kubeconfig
	I0620 18:45:26.429017  474995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19106-274269/kubeconfig: {Name:mke344b955a4582ad77895759c31c36670e563b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 18:45:26.429247  474995 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0620 18:45:26.429661  474995 config.go:182] Loaded profile config "old-k8s-version-337794": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0620 18:45:26.429638  474995 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0620 18:45:26.429739  474995 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-337794"
	I0620 18:45:26.429761  474995 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-337794"
	W0620 18:45:26.429768  474995 addons.go:243] addon storage-provisioner should already be in state true
	I0620 18:45:26.429795  474995 host.go:66] Checking if "old-k8s-version-337794" exists ...
	I0620 18:45:26.429972  474995 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-337794"
	I0620 18:45:26.430011  474995 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-337794"
	I0620 18:45:26.430258  474995 cli_runner.go:164] Run: docker container inspect old-k8s-version-337794 --format={{.State.Status}}
	I0620 18:45:26.430274  474995 cli_runner.go:164] Run: docker container inspect old-k8s-version-337794 --format={{.State.Status}}
	I0620 18:45:26.430642  474995 addons.go:69] Setting dashboard=true in profile "old-k8s-version-337794"
	I0620 18:45:26.430688  474995 addons.go:234] Setting addon dashboard=true in "old-k8s-version-337794"
	W0620 18:45:26.430744  474995 addons.go:243] addon dashboard should already be in state true
	I0620 18:45:26.430767  474995 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-337794"
	I0620 18:45:26.430797  474995 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-337794"
	W0620 18:45:26.430808  474995 addons.go:243] addon metrics-server should already be in state true
	I0620 18:45:26.430831  474995 host.go:66] Checking if "old-k8s-version-337794" exists ...
	I0620 18:45:26.430833  474995 host.go:66] Checking if "old-k8s-version-337794" exists ...
	I0620 18:45:26.431316  474995 cli_runner.go:164] Run: docker container inspect old-k8s-version-337794 --format={{.State.Status}}
	I0620 18:45:26.431602  474995 cli_runner.go:164] Run: docker container inspect old-k8s-version-337794 --format={{.State.Status}}
	I0620 18:45:26.435116  474995 out.go:177] * Verifying Kubernetes components...
	I0620 18:45:26.437040  474995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0620 18:45:26.488411  474995 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0620 18:45:26.490418  474995 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0620 18:45:26.492046  474995 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-337794"
	W0620 18:45:26.492066  474995 addons.go:243] addon default-storageclass should already be in state true
	I0620 18:45:26.492092  474995 host.go:66] Checking if "old-k8s-version-337794" exists ...
	I0620 18:45:26.492499  474995 cli_runner.go:164] Run: docker container inspect old-k8s-version-337794 --format={{.State.Status}}
	I0620 18:45:26.495167  474995 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0620 18:45:26.495203  474995 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0620 18:45:26.495278  474995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-337794
	I0620 18:45:26.501450  474995 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0620 18:45:26.503296  474995 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0620 18:45:26.503317  474995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0620 18:45:26.503430  474995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-337794
	I0620 18:45:26.515125  474995 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0620 18:45:26.519053  474995 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0620 18:45:26.519084  474995 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0620 18:45:26.519165  474995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-337794
	I0620 18:45:26.567210  474995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/old-k8s-version-337794/id_rsa Username:docker}
	I0620 18:45:26.569349  474995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/old-k8s-version-337794/id_rsa Username:docker}
	I0620 18:45:26.581951  474995 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0620 18:45:26.581972  474995 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0620 18:45:26.582035  474995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-337794
	I0620 18:45:26.599286  474995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/old-k8s-version-337794/id_rsa Username:docker}
	I0620 18:45:26.616911  474995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/old-k8s-version-337794/id_rsa Username:docker}
	I0620 18:45:26.701523  474995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0620 18:45:26.739913  474995 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-337794" to be "Ready" ...
	I0620 18:45:26.797904  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0620 18:45:26.825776  474995 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0620 18:45:26.825851  474995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0620 18:45:26.832267  474995 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0620 18:45:26.832348  474995 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0620 18:45:26.895457  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0620 18:45:26.896096  474995 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0620 18:45:26.896145  474995 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0620 18:45:26.903787  474995 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0620 18:45:26.903861  474995 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0620 18:45:26.996943  474995 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0620 18:45:26.997023  474995 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0620 18:45:27.014223  474995 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0620 18:45:27.014300  474995 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0620 18:45:27.084473  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0620 18:45:27.094655  474995 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0620 18:45:27.094724  474995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0620 18:45:27.124592  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:27.124673  474995 retry.go:31] will retry after 370.863316ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
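(The repeated "apply failed, will retry" entries in this part of the log are expected while the apiserver on localhost:8443 is still coming back up; retry.go simply re-issues each kubectl apply after a short, slightly varying delay such as the 370.863316ms above. A minimal sketch of that retry pattern follows, for illustration only; it is not minikube's retry.go.)

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn up to attempts times, sleeping a jittered delay between
    // failures, mirroring the "will retry after <delay>" messages in the log.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(5, 300*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return errors.New("connection to the server localhost:8443 was refused")
            }
            return nil
        })
        fmt.Println("result:", err)
    }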
	I0620 18:45:27.202485  474995 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0620 18:45:27.202556  474995 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0620 18:45:27.220699  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:27.220807  474995 retry.go:31] will retry after 366.944418ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:27.239695  474995 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0620 18:45:27.239771  474995 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0620 18:45:27.265376  474995 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0620 18:45:27.265451  474995 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0620 18:45:27.309896  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:27.309977  474995 retry.go:31] will retry after 256.675444ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:27.321553  474995 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0620 18:45:27.321620  474995 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0620 18:45:27.340045  474995 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0620 18:45:27.340117  474995 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0620 18:45:27.358641  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0620 18:45:27.446866  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:27.446953  474995 retry.go:31] will retry after 132.849409ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:27.496156  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0620 18:45:27.566796  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0620 18:45:27.570594  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:27.570626  474995 retry.go:31] will retry after 294.788899ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:27.580846  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0620 18:45:27.588102  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0620 18:45:27.778661  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:27.778692  474995 retry.go:31] will retry after 482.744767ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0620 18:45:27.845901  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:27.845985  474995 retry.go:31] will retry after 375.571167ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0620 18:45:27.845940  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:27.846051  474995 retry.go:31] will retry after 481.340294ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:27.866025  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0620 18:45:27.961191  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:27.961236  474995 retry.go:31] will retry after 291.140602ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:28.222757  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0620 18:45:28.252544  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0620 18:45:28.261944  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0620 18:45:28.328293  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0620 18:45:28.338737  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:28.338771  474995 retry.go:31] will retry after 772.454599ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0620 18:45:28.517409  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:28.517444  474995 retry.go:31] will retry after 1.170721411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0620 18:45:28.594907  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:28.594953  474995 retry.go:31] will retry after 412.597153ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0620 18:45:28.605949  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:28.605992  474995 retry.go:31] will retry after 432.83986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:28.740550  474995 node_ready.go:53] error getting node "old-k8s-version-337794": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-337794": dial tcp 192.168.85.2:8443: connect: connection refused
	I0620 18:45:29.008077  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0620 18:45:29.039401  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0620 18:45:29.111830  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0620 18:45:29.112129  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:29.112170  474995 retry.go:31] will retry after 995.935872ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0620 18:45:29.278142  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:29.278177  474995 retry.go:31] will retry after 735.92967ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0620 18:45:29.299345  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:29.299391  474995 retry.go:31] will retry after 545.493673ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:29.688909  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0620 18:45:29.790740  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:29.790805  474995 retry.go:31] will retry after 979.605784ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:29.845074  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0620 18:45:29.942172  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:29.942203  474995 retry.go:31] will retry after 750.943794ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:30.015196  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0620 18:45:30.108935  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0620 18:45:30.134743  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:30.134793  474995 retry.go:31] will retry after 1.378417129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0620 18:45:30.228241  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:30.228277  474995 retry.go:31] will retry after 1.275462872s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:30.693324  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0620 18:45:30.740812  474995 node_ready.go:53] error getting node "old-k8s-version-337794": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-337794": dial tcp 192.168.85.2:8443: connect: connection refused
	I0620 18:45:30.771063  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0620 18:45:30.792951  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:30.792983  474995 retry.go:31] will retry after 1.681329409s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0620 18:45:30.872255  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:30.872289  474995 retry.go:31] will retry after 2.702815367s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:31.504616  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0620 18:45:31.513949  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0620 18:45:31.656425  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:31.656458  474995 retry.go:31] will retry after 1.832884674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0620 18:45:31.701709  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:31.701743  474995 retry.go:31] will retry after 2.702565175s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:32.475182  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0620 18:45:32.623532  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:32.623568  474995 retry.go:31] will retry after 4.016323537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:32.741185  474995 node_ready.go:53] error getting node "old-k8s-version-337794": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-337794": dial tcp 192.168.85.2:8443: connect: connection refused
	I0620 18:45:33.489919  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0620 18:45:33.576191  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0620 18:45:33.673774  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:33.673807  474995 retry.go:31] will retry after 1.869626572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0620 18:45:33.745585  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:33.745620  474995 retry.go:31] will retry after 3.202786947s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:34.404791  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0620 18:45:34.577986  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:34.578018  474995 retry.go:31] will retry after 3.614395502s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:35.241210  474995 node_ready.go:53] error getting node "old-k8s-version-337794": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-337794": dial tcp 192.168.85.2:8443: connect: connection refused
	I0620 18:45:35.543705  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0620 18:45:36.116597  474995 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:36.116631  474995 retry.go:31] will retry after 6.227477066s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0620 18:45:36.640956  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0620 18:45:36.949070  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0620 18:45:38.192841  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0620 18:45:42.344812  474995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0620 18:45:47.064762  474995 node_ready.go:49] node "old-k8s-version-337794" has status "Ready":"True"
	I0620 18:45:47.064795  474995 node_ready.go:38] duration metric: took 20.32479349s for node "old-k8s-version-337794" to be "Ready" ...
	I0620 18:45:47.064811  474995 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0620 18:45:47.166432  474995 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-2mwgh" in "kube-system" namespace to be "Ready" ...
	I0620 18:45:47.342673  474995 pod_ready.go:92] pod "coredns-74ff55c5b-2mwgh" in "kube-system" namespace has status "Ready":"True"
	I0620 18:45:47.342754  474995 pod_ready.go:81] duration metric: took 176.244701ms for pod "coredns-74ff55c5b-2mwgh" in "kube-system" namespace to be "Ready" ...
	I0620 18:45:47.342783  474995 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-337794" in "kube-system" namespace to be "Ready" ...
	I0620 18:45:48.697923  474995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (12.056913146s)
	I0620 18:45:48.698175  474995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.749067155s)
	I0620 18:45:48.698217  474995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.505344209s)
	I0620 18:45:48.698436  474995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.353593865s)
	I0620 18:45:48.698482  474995 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-337794"
	I0620 18:45:48.699880  474995 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-337794 addons enable metrics-server
	
	I0620 18:45:48.716855  474995 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0620 18:45:48.719174  474995 addons.go:510] duration metric: took 22.289537511s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
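	The "apply failed, will retry" / "will retry after ..." pairs above follow one simple pattern: the addon manifest apply fails while the API server on localhost:8443 is still down, so the same command is re-run after a growing delay until it succeeds. A minimal Go sketch of that pattern follows; it is not minikube's actual retry.go, and the command, attempt count, and doubling delay are illustrative assumptions.

	// Illustrative sketch of a retry-with-growing-delay loop, not minikube's retry.go.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs the given command until it exits 0 or attempts run out,
	// sleeping a little longer between tries, roughly as the logged delays do.
	func applyWithRetry(attempts int, delay time.Duration, args ...string) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = exec.Command(args[0], args[1:]...).Run(); err == nil {
				return nil
			}
			fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2 // grow the wait between attempts
		}
		return err
	}

	func main() {
		// Hypothetical invocation mirroring one of the logged addon applies.
		err := applyWithRetry(5, 400*time.Millisecond,
			"kubectl", "apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
		if err != nil {
			fmt.Println("giving up:", err)
		}
	}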
	I0620 18:45:49.349145  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:45:51.349297  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:45:53.357628  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:45:55.849455  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:45:58.348764  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:00.353548  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:02.848981  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:05.349028  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:07.349751  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:09.350319  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:11.849974  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:14.348980  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:16.349942  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:18.849813  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:20.863336  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:23.349646  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:25.849710  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:28.348994  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:30.848703  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:32.849274  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:34.849525  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:37.348072  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:39.349427  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:41.849193  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:43.852419  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:46.348582  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:48.348879  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:50.848699  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:52.849772  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:55.356079  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:57.853684  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:00.349893  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:02.850400  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:05.348417  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:07.428563  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:09.851521  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:12.349832  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:14.849037  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:15.849745  474995 pod_ready.go:92] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"True"
	I0620 18:47:15.849770  474995 pod_ready.go:81] duration metric: took 1m28.506965279s for pod "etcd-old-k8s-version-337794" in "kube-system" namespace to be "Ready" ...
	I0620 18:47:15.849786  474995 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-337794" in "kube-system" namespace to be "Ready" ...
	I0620 18:47:15.855468  474995 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"True"
	I0620 18:47:15.855494  474995 pod_ready.go:81] duration metric: took 5.700262ms for pod "kube-apiserver-old-k8s-version-337794" in "kube-system" namespace to be "Ready" ...
	I0620 18:47:15.855508  474995 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-337794" in "kube-system" namespace to be "Ready" ...
	I0620 18:47:15.861138  474995 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"True"
	I0620 18:47:15.861163  474995 pod_ready.go:81] duration metric: took 5.647503ms for pod "kube-controller-manager-old-k8s-version-337794" in "kube-system" namespace to be "Ready" ...
	I0620 18:47:15.861175  474995 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h4r8m" in "kube-system" namespace to be "Ready" ...
	I0620 18:47:15.866333  474995 pod_ready.go:92] pod "kube-proxy-h4r8m" in "kube-system" namespace has status "Ready":"True"
	I0620 18:47:15.866357  474995 pod_ready.go:81] duration metric: took 5.175308ms for pod "kube-proxy-h4r8m" in "kube-system" namespace to be "Ready" ...
	I0620 18:47:15.866369  474995 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-337794" in "kube-system" namespace to be "Ready" ...
	I0620 18:47:15.872091  474995 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"True"
	I0620 18:47:15.872116  474995 pod_ready.go:81] duration metric: took 5.738793ms for pod "kube-scheduler-old-k8s-version-337794" in "kube-system" namespace to be "Ready" ...
	I0620 18:47:15.872128  474995 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace to be "Ready" ...
	I0620 18:47:17.879482  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:20.377943  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:22.878380  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:24.879059  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:27.378857  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:29.880671  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:32.377891  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:34.378925  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:36.879091  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:38.879398  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:41.378023  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:43.378583  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:45.878410  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:47.878747  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:50.378211  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:52.878819  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:55.377953  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:57.378340  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:59.879193  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:02.383795  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:04.878354  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:07.378889  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:09.878115  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:11.878787  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:14.378388  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:16.878765  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:19.378075  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:21.878074  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:23.878364  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:25.878833  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:28.378795  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:30.878837  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:32.879215  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:34.879379  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:37.378069  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:39.378631  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:41.878869  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:43.878920  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:46.377960  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:48.878314  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:51.378672  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:53.879513  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:56.378225  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:58.437149  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:00.879888  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:03.378362  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:05.878776  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:08.377726  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:10.378674  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:12.878146  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:14.878576  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:16.878873  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:19.378184  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:21.878629  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:23.879303  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:26.378224  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:28.877767  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:30.878808  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:33.378852  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:35.879565  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:38.378051  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:40.877938  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:42.878558  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:45.377841  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:47.378330  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:49.878410  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:52.378806  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:54.878721  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:57.378940  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:59.879227  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:02.377976  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:04.378539  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:06.878453  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:09.378492  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:11.378532  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:13.883691  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:16.378521  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:18.878844  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:21.378500  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:23.378692  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:25.378781  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:27.380252  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:29.880359  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:32.381202  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:34.879053  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:37.383884  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:39.399920  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:41.878946  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:44.377798  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:46.378331  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:48.378534  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:50.878749  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:53.378809  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:55.380295  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:57.884774  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:51:00.379513  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:51:02.878956  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:51:05.378040  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:51:07.378498  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:51:09.379657  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:51:11.878526  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:51:14.378391  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:51:15.878416  474995 pod_ready.go:81] duration metric: took 4m0.006273918s for pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace to be "Ready" ...
	E0620 18:51:15.878443  474995 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0620 18:51:15.878452  474995 pod_ready.go:38] duration metric: took 5m28.813628413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
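	The pod_ready.go lines above poll each pod's Ready condition until it reports "True" or the extra-wait deadline expires, which is how the metrics-server wait ends in "context deadline exceeded". Below is a minimal sketch of that polling loop; it is not minikube's implementation, it shells out to kubectl (assumed to be on PATH) instead of using an in-process client, and the context, namespace, and pod names are taken from the log for illustration only.

	// Illustrative sketch of polling a pod's Ready condition with a deadline.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitPodReady returns nil once the pod's Ready condition is "True",
	// or an error if the deadline passes first.
	func waitPodReady(kubectx, namespace, pod string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubectx,
				"-n", namespace, "get", "pod", pod,
				"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				return nil
			}
			time.Sleep(2 * time.Second) // each miss corresponds to a logged has status "Ready":"False" line
		}
		return fmt.Errorf("pod %q in %q not Ready within %s", pod, namespace, timeout)
	}

	func main() {
		if err := waitPodReady("old-k8s-version-337794", "kube-system",
			"metrics-server-9975d5f86-s95qr", 6*time.Minute); err != nil {
			fmt.Println(err) // mirrors the deadline-exceeded outcome above
		}
	}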
	I0620 18:51:15.878466  474995 api_server.go:52] waiting for apiserver process to appear ...
	I0620 18:51:15.878494  474995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0620 18:51:15.878557  474995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0620 18:51:15.917287  474995 cri.go:89] found id: "ab9a5dc87140998b57eded669bdfe78785f54034fe3076c3679faee0c5e8b168"
	I0620 18:51:15.917312  474995 cri.go:89] found id: "cfdcf7bafc98db0ed8b117a7f81af17f4aa0ffe0d900af8fb469323b24ada207"
	I0620 18:51:15.917317  474995 cri.go:89] found id: ""
	I0620 18:51:15.917324  474995 logs.go:276] 2 containers: [ab9a5dc87140998b57eded669bdfe78785f54034fe3076c3679faee0c5e8b168 cfdcf7bafc98db0ed8b117a7f81af17f4aa0ffe0d900af8fb469323b24ada207]
	I0620 18:51:15.917380  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:15.921092  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:15.924286  474995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0620 18:51:15.924366  474995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0620 18:51:15.962596  474995 cri.go:89] found id: "d10f227a579ece3ba7f135984b5533ab1954b830dbfcff6951b3b52ca343d8c6"
	I0620 18:51:15.962617  474995 cri.go:89] found id: "78a0fe9b19212a5f753e82266fc6eda58535c93f7b45a9f7ae1027d529ba1b9f"
	I0620 18:51:15.962622  474995 cri.go:89] found id: ""
	I0620 18:51:15.962629  474995 logs.go:276] 2 containers: [d10f227a579ece3ba7f135984b5533ab1954b830dbfcff6951b3b52ca343d8c6 78a0fe9b19212a5f753e82266fc6eda58535c93f7b45a9f7ae1027d529ba1b9f]
	I0620 18:51:15.962688  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:15.966456  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:15.969955  474995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0620 18:51:15.970053  474995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0620 18:51:16.011278  474995 cri.go:89] found id: "3b0caff456f550681c8dc6f7ce0c9af69317ec12ac39d8a6e1fa2040ea4c8a50"
	I0620 18:51:16.011303  474995 cri.go:89] found id: "41ba1c27ef09f4269353d6500282257c6dc0ee5cc4906cc9d3f99fdacb2835c1"
	I0620 18:51:16.011308  474995 cri.go:89] found id: ""
	I0620 18:51:16.011316  474995 logs.go:276] 2 containers: [3b0caff456f550681c8dc6f7ce0c9af69317ec12ac39d8a6e1fa2040ea4c8a50 41ba1c27ef09f4269353d6500282257c6dc0ee5cc4906cc9d3f99fdacb2835c1]
	I0620 18:51:16.011380  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.020059  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.023712  474995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0620 18:51:16.023787  474995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0620 18:51:16.066563  474995 cri.go:89] found id: "216bb91d7edcfdb7e26e452ac2980c11fb6d169bc9ec8ff9d4125cefc89b4af8"
	I0620 18:51:16.066595  474995 cri.go:89] found id: "e895300ffd7e6f784d9567baacfb9b233853b31245990e391675289e62ad7659"
	I0620 18:51:16.066604  474995 cri.go:89] found id: ""
	I0620 18:51:16.066611  474995 logs.go:276] 2 containers: [216bb91d7edcfdb7e26e452ac2980c11fb6d169bc9ec8ff9d4125cefc89b4af8 e895300ffd7e6f784d9567baacfb9b233853b31245990e391675289e62ad7659]
	I0620 18:51:16.066671  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.070402  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.074053  474995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0620 18:51:16.074152  474995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0620 18:51:16.120292  474995 cri.go:89] found id: "fd105e227c64f742488149d7a7ef12f65582c9a17ebf40af0d31772fff23b850"
	I0620 18:51:16.120316  474995 cri.go:89] found id: "0084076706589b3159aa05b37a0ef12c102644140788b191724b0f7267fd4e3c"
	I0620 18:51:16.120322  474995 cri.go:89] found id: ""
	I0620 18:51:16.120329  474995 logs.go:276] 2 containers: [fd105e227c64f742488149d7a7ef12f65582c9a17ebf40af0d31772fff23b850 0084076706589b3159aa05b37a0ef12c102644140788b191724b0f7267fd4e3c]
	I0620 18:51:16.120414  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.124444  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.128126  474995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0620 18:51:16.128201  474995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0620 18:51:16.165780  474995 cri.go:89] found id: "ae831abf0d9b3aec7fd99a96e612f60ca28994573225480bd39d669115fd9323"
	I0620 18:51:16.165852  474995 cri.go:89] found id: "6ce85447f9024fd431a26a603f1a93ee2dd0f45da6feddb6ce6fee810b5fdd37"
	I0620 18:51:16.165887  474995 cri.go:89] found id: ""
	I0620 18:51:16.165913  474995 logs.go:276] 2 containers: [ae831abf0d9b3aec7fd99a96e612f60ca28994573225480bd39d669115fd9323 6ce85447f9024fd431a26a603f1a93ee2dd0f45da6feddb6ce6fee810b5fdd37]
	I0620 18:51:16.166004  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.169944  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.173672  474995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0620 18:51:16.173796  474995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0620 18:51:16.209860  474995 cri.go:89] found id: "e098983ce65fd9a05371c1e9237d75438fa40f748e0a679589f472f0a39a08e8"
	I0620 18:51:16.209887  474995 cri.go:89] found id: "13a0bb253788862ae22e6058b7f190fba11dd8db1d86a9df13f8b2aeb42fcecb"
	I0620 18:51:16.209904  474995 cri.go:89] found id: ""
	I0620 18:51:16.209913  474995 logs.go:276] 2 containers: [e098983ce65fd9a05371c1e9237d75438fa40f748e0a679589f472f0a39a08e8 13a0bb253788862ae22e6058b7f190fba11dd8db1d86a9df13f8b2aeb42fcecb]
	I0620 18:51:16.210006  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.214081  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.217615  474995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0620 18:51:16.217705  474995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0620 18:51:16.257685  474995 cri.go:89] found id: "9b335eeb8b67e4c38ce7a7e9a7bc9f20eb7a66185e54a8a71a8ecb76c3a9aa5f"
	I0620 18:51:16.257708  474995 cri.go:89] found id: ""
	I0620 18:51:16.257717  474995 logs.go:276] 1 containers: [9b335eeb8b67e4c38ce7a7e9a7bc9f20eb7a66185e54a8a71a8ecb76c3a9aa5f]
	I0620 18:51:16.257793  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.261284  474995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0620 18:51:16.261408  474995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0620 18:51:16.314752  474995 cri.go:89] found id: "eed6e9e374d62971b45702bee247f112b7994fb1614a6637e94d738953631a39"
	I0620 18:51:16.314778  474995 cri.go:89] found id: "2361035748bd059e7b4f11b7715ec3ccdb81adbf542d803180ebb8bec389d445"
	I0620 18:51:16.314783  474995 cri.go:89] found id: ""
	I0620 18:51:16.314790  474995 logs.go:276] 2 containers: [eed6e9e374d62971b45702bee247f112b7994fb1614a6637e94d738953631a39 2361035748bd059e7b4f11b7715ec3ccdb81adbf542d803180ebb8bec389d445]
	I0620 18:51:16.314855  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.318775  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.322870  474995 logs.go:123] Gathering logs for storage-provisioner [eed6e9e374d62971b45702bee247f112b7994fb1614a6637e94d738953631a39] ...
	I0620 18:51:16.322935  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed6e9e374d62971b45702bee247f112b7994fb1614a6637e94d738953631a39"
	I0620 18:51:16.363609  474995 logs.go:123] Gathering logs for storage-provisioner [2361035748bd059e7b4f11b7715ec3ccdb81adbf542d803180ebb8bec389d445] ...
	I0620 18:51:16.363639  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2361035748bd059e7b4f11b7715ec3ccdb81adbf542d803180ebb8bec389d445"
	I0620 18:51:16.401395  474995 logs.go:123] Gathering logs for containerd ...
	I0620 18:51:16.401426  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0620 18:51:16.461054  474995 logs.go:123] Gathering logs for etcd [d10f227a579ece3ba7f135984b5533ab1954b830dbfcff6951b3b52ca343d8c6] ...
	I0620 18:51:16.461090  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d10f227a579ece3ba7f135984b5533ab1954b830dbfcff6951b3b52ca343d8c6"
	I0620 18:51:16.502734  474995 logs.go:123] Gathering logs for etcd [78a0fe9b19212a5f753e82266fc6eda58535c93f7b45a9f7ae1027d529ba1b9f] ...
	I0620 18:51:16.502766  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78a0fe9b19212a5f753e82266fc6eda58535c93f7b45a9f7ae1027d529ba1b9f"
	I0620 18:51:16.547328  474995 logs.go:123] Gathering logs for coredns [41ba1c27ef09f4269353d6500282257c6dc0ee5cc4906cc9d3f99fdacb2835c1] ...
	I0620 18:51:16.547358  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41ba1c27ef09f4269353d6500282257c6dc0ee5cc4906cc9d3f99fdacb2835c1"
	I0620 18:51:16.586152  474995 logs.go:123] Gathering logs for kube-controller-manager [6ce85447f9024fd431a26a603f1a93ee2dd0f45da6feddb6ce6fee810b5fdd37] ...
	I0620 18:51:16.586181  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ce85447f9024fd431a26a603f1a93ee2dd0f45da6feddb6ce6fee810b5fdd37"
	I0620 18:51:16.641778  474995 logs.go:123] Gathering logs for kubelet ...
	I0620 18:51:16.641814  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0620 18:51:16.701253  474995 logs.go:138] Found kubelet problem: Jun 20 18:45:47 old-k8s-version-337794 kubelet[665]: E0620 18:45:47.184128     665 reflector.go:138] object-"kube-system"/"metrics-server-token-5jskx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-5jskx" is forbidden: User "system:node:old-k8s-version-337794" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-337794' and this object
	W0620 18:51:16.701515  474995 logs.go:138] Found kubelet problem: Jun 20 18:45:47 old-k8s-version-337794 kubelet[665]: E0620 18:45:47.184290     665 reflector.go:138] object-"kube-system"/"kindnet-token-zqbvn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-zqbvn" is forbidden: User "system:node:old-k8s-version-337794" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-337794' and this object
	W0620 18:51:16.701729  474995 logs.go:138] Found kubelet problem: Jun 20 18:45:47 old-k8s-version-337794 kubelet[665]: E0620 18:45:47.184370     665 reflector.go:138] object-"default"/"default-token-lw785": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-lw785" is forbidden: User "system:node:old-k8s-version-337794" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-337794' and this object
	W0620 18:51:16.701946  474995 logs.go:138] Found kubelet problem: Jun 20 18:45:47 old-k8s-version-337794 kubelet[665]: E0620 18:45:47.184441     665 reflector.go:138] object-"kube-system"/"kube-proxy-token-6scpt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-6scpt" is forbidden: User "system:node:old-k8s-version-337794" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-337794' and this object
	W0620 18:51:16.702215  474995 logs.go:138] Found kubelet problem: Jun 20 18:45:47 old-k8s-version-337794 kubelet[665]: E0620 18:45:47.184579     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-dfq7m": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-dfq7m" is forbidden: User "system:node:old-k8s-version-337794" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-337794' and this object
	W0620 18:51:16.702446  474995 logs.go:138] Found kubelet problem: Jun 20 18:45:47 old-k8s-version-337794 kubelet[665]: E0620 18:45:47.184668     665 reflector.go:138] object-"kube-system"/"coredns-token-785nz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-785nz" is forbidden: User "system:node:old-k8s-version-337794" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-337794' and this object
	W0620 18:51:16.702665  474995 logs.go:138] Found kubelet problem: Jun 20 18:45:47 old-k8s-version-337794 kubelet[665]: E0620 18:45:47.184747     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-337794" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-337794' and this object
	W0620 18:51:16.702866  474995 logs.go:138] Found kubelet problem: Jun 20 18:45:47 old-k8s-version-337794 kubelet[665]: E0620 18:45:47.184827     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-337794" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-337794' and this object
	W0620 18:51:16.710426  474995 logs.go:138] Found kubelet problem: Jun 20 18:45:49 old-k8s-version-337794 kubelet[665]: E0620 18:45:49.248664     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0620 18:51:16.712632  474995 logs.go:138] Found kubelet problem: Jun 20 18:45:49 old-k8s-version-337794 kubelet[665]: E0620 18:45:49.863806     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.715551  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:03 old-k8s-version-337794 kubelet[665]: E0620 18:46:03.515344     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0620 18:51:16.715982  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:04 old-k8s-version-337794 kubelet[665]: E0620 18:46:04.480260     665 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-nkcgq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-nkcgq" is forbidden: User "system:node:old-k8s-version-337794" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-337794' and this object
	W0620 18:51:16.718088  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:11 old-k8s-version-337794 kubelet[665]: E0620 18:46:11.938046     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.718430  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:12 old-k8s-version-337794 kubelet[665]: E0620 18:46:12.936968     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.718764  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:13 old-k8s-version-337794 kubelet[665]: E0620 18:46:13.943072     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.719299  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:18 old-k8s-version-337794 kubelet[665]: E0620 18:46:18.503586     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.720222  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:25 old-k8s-version-337794 kubelet[665]: E0620 18:46:25.975971     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.722681  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:31 old-k8s-version-337794 kubelet[665]: E0620 18:46:31.512372     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0620 18:51:16.723015  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:33 old-k8s-version-337794 kubelet[665]: E0620 18:46:33.400781     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.723204  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:45 old-k8s-version-337794 kubelet[665]: E0620 18:46:45.503187     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.723792  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:49 old-k8s-version-337794 kubelet[665]: E0620 18:46:49.037647     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.724119  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:53 old-k8s-version-337794 kubelet[665]: E0620 18:46:53.401356     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.724304  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:57 old-k8s-version-337794 kubelet[665]: E0620 18:46:57.503282     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.724630  474995 logs.go:138] Found kubelet problem: Jun 20 18:47:07 old-k8s-version-337794 kubelet[665]: E0620 18:47:07.502771     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.727084  474995 logs.go:138] Found kubelet problem: Jun 20 18:47:12 old-k8s-version-337794 kubelet[665]: E0620 18:47:12.515321     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0620 18:51:16.727450  474995 logs.go:138] Found kubelet problem: Jun 20 18:47:18 old-k8s-version-337794 kubelet[665]: E0620 18:47:18.503470     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.727640  474995 logs.go:138] Found kubelet problem: Jun 20 18:47:27 old-k8s-version-337794 kubelet[665]: E0620 18:47:27.503319     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.728229  474995 logs.go:138] Found kubelet problem: Jun 20 18:47:33 old-k8s-version-337794 kubelet[665]: E0620 18:47:33.205812     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.728575  474995 logs.go:138] Found kubelet problem: Jun 20 18:47:34 old-k8s-version-337794 kubelet[665]: E0620 18:47:34.209709     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.728767  474995 logs.go:138] Found kubelet problem: Jun 20 18:47:41 old-k8s-version-337794 kubelet[665]: E0620 18:47:41.503093     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.729094  474995 logs.go:138] Found kubelet problem: Jun 20 18:47:47 old-k8s-version-337794 kubelet[665]: E0620 18:47:47.502809     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.729277  474995 logs.go:138] Found kubelet problem: Jun 20 18:47:56 old-k8s-version-337794 kubelet[665]: E0620 18:47:56.504347     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.729606  474995 logs.go:138] Found kubelet problem: Jun 20 18:48:02 old-k8s-version-337794 kubelet[665]: E0620 18:48:02.503257     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.729797  474995 logs.go:138] Found kubelet problem: Jun 20 18:48:09 old-k8s-version-337794 kubelet[665]: E0620 18:48:09.503212     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.730123  474995 logs.go:138] Found kubelet problem: Jun 20 18:48:14 old-k8s-version-337794 kubelet[665]: E0620 18:48:14.504569     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.730308  474995 logs.go:138] Found kubelet problem: Jun 20 18:48:24 old-k8s-version-337794 kubelet[665]: E0620 18:48:24.506143     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.730637  474995 logs.go:138] Found kubelet problem: Jun 20 18:48:26 old-k8s-version-337794 kubelet[665]: E0620 18:48:26.503482     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.730967  474995 logs.go:138] Found kubelet problem: Jun 20 18:48:37 old-k8s-version-337794 kubelet[665]: E0620 18:48:37.502884     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.733454  474995 logs.go:138] Found kubelet problem: Jun 20 18:48:38 old-k8s-version-337794 kubelet[665]: E0620 18:48:38.511247     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0620 18:51:16.733791  474995 logs.go:138] Found kubelet problem: Jun 20 18:48:48 old-k8s-version-337794 kubelet[665]: E0620 18:48:48.504010     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.733979  474995 logs.go:138] Found kubelet problem: Jun 20 18:48:50 old-k8s-version-337794 kubelet[665]: E0620 18:48:50.506025     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.734596  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:01 old-k8s-version-337794 kubelet[665]: E0620 18:49:01.386876     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.734926  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:03 old-k8s-version-337794 kubelet[665]: E0620 18:49:03.401266     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.735125  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:03 old-k8s-version-337794 kubelet[665]: E0620 18:49:03.503351     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.735310  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:16 old-k8s-version-337794 kubelet[665]: E0620 18:49:16.506224     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.735639  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:17 old-k8s-version-337794 kubelet[665]: E0620 18:49:17.502948     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.735965  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:28 old-k8s-version-337794 kubelet[665]: E0620 18:49:28.505091     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.736154  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:29 old-k8s-version-337794 kubelet[665]: E0620 18:49:29.503220     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.736482  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:39 old-k8s-version-337794 kubelet[665]: E0620 18:49:39.503259     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.736666  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:44 old-k8s-version-337794 kubelet[665]: E0620 18:49:44.503505     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.736992  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:52 old-k8s-version-337794 kubelet[665]: E0620 18:49:52.504696     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.737176  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:57 old-k8s-version-337794 kubelet[665]: E0620 18:49:57.503168     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.737508  474995 logs.go:138] Found kubelet problem: Jun 20 18:50:04 old-k8s-version-337794 kubelet[665]: E0620 18:50:04.503416     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.737691  474995 logs.go:138] Found kubelet problem: Jun 20 18:50:08 old-k8s-version-337794 kubelet[665]: E0620 18:50:08.504863     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.738018  474995 logs.go:138] Found kubelet problem: Jun 20 18:50:16 old-k8s-version-337794 kubelet[665]: E0620 18:50:16.503299     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.738204  474995 logs.go:138] Found kubelet problem: Jun 20 18:50:19 old-k8s-version-337794 kubelet[665]: E0620 18:50:19.503499     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.738532  474995 logs.go:138] Found kubelet problem: Jun 20 18:50:30 old-k8s-version-337794 kubelet[665]: E0620 18:50:30.503459     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.738717  474995 logs.go:138] Found kubelet problem: Jun 20 18:50:34 old-k8s-version-337794 kubelet[665]: E0620 18:50:34.504128     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.739053  474995 logs.go:138] Found kubelet problem: Jun 20 18:50:44 old-k8s-version-337794 kubelet[665]: E0620 18:50:44.503802     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.739240  474995 logs.go:138] Found kubelet problem: Jun 20 18:50:48 old-k8s-version-337794 kubelet[665]: E0620 18:50:48.503323     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.739566  474995 logs.go:138] Found kubelet problem: Jun 20 18:50:55 old-k8s-version-337794 kubelet[665]: E0620 18:50:55.503245     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.739752  474995 logs.go:138] Found kubelet problem: Jun 20 18:51:03 old-k8s-version-337794 kubelet[665]: E0620 18:51:03.503156     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.740079  474995 logs.go:138] Found kubelet problem: Jun 20 18:51:08 old-k8s-version-337794 kubelet[665]: E0620 18:51:08.506428     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	I0620 18:51:16.740090  474995 logs.go:123] Gathering logs for dmesg ...
	I0620 18:51:16.740104  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0620 18:51:16.769562  474995 logs.go:123] Gathering logs for kube-apiserver [cfdcf7bafc98db0ed8b117a7f81af17f4aa0ffe0d900af8fb469323b24ada207] ...
	I0620 18:51:16.769593  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cfdcf7bafc98db0ed8b117a7f81af17f4aa0ffe0d900af8fb469323b24ada207"
	I0620 18:51:16.863581  474995 logs.go:123] Gathering logs for coredns [3b0caff456f550681c8dc6f7ce0c9af69317ec12ac39d8a6e1fa2040ea4c8a50] ...
	I0620 18:51:16.863616  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b0caff456f550681c8dc6f7ce0c9af69317ec12ac39d8a6e1fa2040ea4c8a50"
	I0620 18:51:16.908011  474995 logs.go:123] Gathering logs for container status ...
	I0620 18:51:16.908041  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0620 18:51:16.960275  474995 logs.go:123] Gathering logs for describe nodes ...
	I0620 18:51:16.960331  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0620 18:51:17.133154  474995 logs.go:123] Gathering logs for kube-apiserver [ab9a5dc87140998b57eded669bdfe78785f54034fe3076c3679faee0c5e8b168] ...
	I0620 18:51:17.133189  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab9a5dc87140998b57eded669bdfe78785f54034fe3076c3679faee0c5e8b168"
	I0620 18:51:17.196301  474995 logs.go:123] Gathering logs for kube-scheduler [e895300ffd7e6f784d9567baacfb9b233853b31245990e391675289e62ad7659] ...
	I0620 18:51:17.196379  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e895300ffd7e6f784d9567baacfb9b233853b31245990e391675289e62ad7659"
	I0620 18:51:17.247573  474995 logs.go:123] Gathering logs for kindnet [13a0bb253788862ae22e6058b7f190fba11dd8db1d86a9df13f8b2aeb42fcecb] ...
	I0620 18:51:17.247606  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a0bb253788862ae22e6058b7f190fba11dd8db1d86a9df13f8b2aeb42fcecb"
	I0620 18:51:17.288206  474995 logs.go:123] Gathering logs for kindnet [e098983ce65fd9a05371c1e9237d75438fa40f748e0a679589f472f0a39a08e8] ...
	I0620 18:51:17.288236  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e098983ce65fd9a05371c1e9237d75438fa40f748e0a679589f472f0a39a08e8"
	I0620 18:51:17.334130  474995 logs.go:123] Gathering logs for kubernetes-dashboard [9b335eeb8b67e4c38ce7a7e9a7bc9f20eb7a66185e54a8a71a8ecb76c3a9aa5f] ...
	I0620 18:51:17.334171  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b335eeb8b67e4c38ce7a7e9a7bc9f20eb7a66185e54a8a71a8ecb76c3a9aa5f"
	I0620 18:51:17.377828  474995 logs.go:123] Gathering logs for kube-scheduler [216bb91d7edcfdb7e26e452ac2980c11fb6d169bc9ec8ff9d4125cefc89b4af8] ...
	I0620 18:51:17.377858  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 216bb91d7edcfdb7e26e452ac2980c11fb6d169bc9ec8ff9d4125cefc89b4af8"
	I0620 18:51:17.422741  474995 logs.go:123] Gathering logs for kube-proxy [fd105e227c64f742488149d7a7ef12f65582c9a17ebf40af0d31772fff23b850] ...
	I0620 18:51:17.422816  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd105e227c64f742488149d7a7ef12f65582c9a17ebf40af0d31772fff23b850"
	I0620 18:51:17.464495  474995 logs.go:123] Gathering logs for kube-proxy [0084076706589b3159aa05b37a0ef12c102644140788b191724b0f7267fd4e3c] ...
	I0620 18:51:17.464523  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0084076706589b3159aa05b37a0ef12c102644140788b191724b0f7267fd4e3c"
	I0620 18:51:17.502232  474995 logs.go:123] Gathering logs for kube-controller-manager [ae831abf0d9b3aec7fd99a96e612f60ca28994573225480bd39d669115fd9323] ...
	I0620 18:51:17.502300  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae831abf0d9b3aec7fd99a96e612f60ca28994573225480bd39d669115fd9323"
	I0620 18:51:17.572024  474995 out.go:304] Setting ErrFile to fd 2...
	I0620 18:51:17.572055  474995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0620 18:51:17.572109  474995 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0620 18:51:17.572123  474995 out.go:239]   Jun 20 18:50:44 old-k8s-version-337794 kubelet[665]: E0620 18:50:44.503802     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	  Jun 20 18:50:44 old-k8s-version-337794 kubelet[665]: E0620 18:50:44.503802     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:17.572131  474995 out.go:239]   Jun 20 18:50:48 old-k8s-version-337794 kubelet[665]: E0620 18:50:48.503323     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jun 20 18:50:48 old-k8s-version-337794 kubelet[665]: E0620 18:50:48.503323     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:17.572147  474995 out.go:239]   Jun 20 18:50:55 old-k8s-version-337794 kubelet[665]: E0620 18:50:55.503245     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	  Jun 20 18:50:55 old-k8s-version-337794 kubelet[665]: E0620 18:50:55.503245     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:17.572154  474995 out.go:239]   Jun 20 18:51:03 old-k8s-version-337794 kubelet[665]: E0620 18:51:03.503156     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jun 20 18:51:03 old-k8s-version-337794 kubelet[665]: E0620 18:51:03.503156     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:17.572166  474995 out.go:239]   Jun 20 18:51:08 old-k8s-version-337794 kubelet[665]: E0620 18:51:08.506428     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	  Jun 20 18:51:08 old-k8s-version-337794 kubelet[665]: E0620 18:51:08.506428     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	I0620 18:51:17.572172  474995 out.go:304] Setting ErrFile to fd 2...
	I0620 18:51:17.572181  474995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:51:27.573431  474995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0620 18:51:27.584997  474995 api_server.go:72] duration metric: took 6m1.155713275s to wait for apiserver process to appear ...
	I0620 18:51:27.585025  474995 api_server.go:88] waiting for apiserver healthz status ...
	I0620 18:51:27.587901  474995 out.go:177] 
	W0620 18:51:27.590132  474995 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0620 18:51:27.590150  474995 out.go:239] * 
	* 
	W0620 18:51:27.591168  474995 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0620 18:51:27.593949  474995 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-337794 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-337794
helpers_test.go:235: (dbg) docker inspect old-k8s-version-337794:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "07c095f8e5a58ad09e6b17489b2c7b66f515e4f022d790666bcc0ee17aa0af52",
	        "Created": "2024-06-20T18:42:06.152945976Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 475206,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-06-20T18:45:19.292882219Z",
	            "FinishedAt": "2024-06-20T18:45:17.845802459Z"
	        },
	        "Image": "sha256:d01e921d87b5c98766e198911bba95096a87baa7b20caabee6d66ddda3a30e16",
	        "ResolvConfPath": "/var/lib/docker/containers/07c095f8e5a58ad09e6b17489b2c7b66f515e4f022d790666bcc0ee17aa0af52/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/07c095f8e5a58ad09e6b17489b2c7b66f515e4f022d790666bcc0ee17aa0af52/hostname",
	        "HostsPath": "/var/lib/docker/containers/07c095f8e5a58ad09e6b17489b2c7b66f515e4f022d790666bcc0ee17aa0af52/hosts",
	        "LogPath": "/var/lib/docker/containers/07c095f8e5a58ad09e6b17489b2c7b66f515e4f022d790666bcc0ee17aa0af52/07c095f8e5a58ad09e6b17489b2c7b66f515e4f022d790666bcc0ee17aa0af52-json.log",
	        "Name": "/old-k8s-version-337794",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-337794:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-337794",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ea1644ea9e435013dd3e98c18a991c556dbdcafe1ef1ee72f146fc45ad3fade7-init/diff:/var/lib/docker/overlay2/2993e2c9fcbb886b1475733978fee74bf42199db877e5d5079a8d8df185eaf52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ea1644ea9e435013dd3e98c18a991c556dbdcafe1ef1ee72f146fc45ad3fade7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ea1644ea9e435013dd3e98c18a991c556dbdcafe1ef1ee72f146fc45ad3fade7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ea1644ea9e435013dd3e98c18a991c556dbdcafe1ef1ee72f146fc45ad3fade7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-337794",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-337794/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-337794",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-337794",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-337794",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8c7947bd1fa0ef12df33af38ad5219cd95a58bae1af7f693fce105c81e468d2d",
	            "SandboxKey": "/var/run/docker/netns/8c7947bd1fa0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-337794": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "92e6ca7ed17d8e6a4efd418b04cf0776bb3d238dbfc197e69e6160e4182782dc",
	                    "EndpointID": "de59302f094e8ce009c68fcdd92387ae8c56e2acc84b99c96fa217d7878cae07",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-337794",
	                        "07c095f8e5a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
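The Ports section of the inspect output above is what the restart sequence below relies on: later log lines repeatedly run docker container inspect with a Go template over .NetworkSettings.Ports to find the published SSH port. A small, hypothetical helper that performs the same lookup by shelling out to the docker CLI (the container name and the 8443/tcp port are just examples taken from the output above) might look like:

// Hypothetical helper, not minikube's code: resolve the host port that a kic
// container publishes for a given container port, using the same
// "docker container inspect -f" Go-template lookup seen later in this log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostPort(container, port string) (string, error) {
	// Builds e.g. {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}
	format := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", port)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("old-k8s-version-337794", "8443/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("8443/tcp is published on 127.0.0.1:" + p) // 33437 per the inspect output above
}

The SSH provisioning steps further down resolve 22/tcp the same way, which is where the 127.0.0.1:33445 address used for the no-preload-530880 container comes from.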
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-337794 -n old-k8s-version-337794
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-337794 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-337794 logs -n 25: (2.049609581s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-355113 sudo find                             | cilium-355113             | jenkins | v1.33.1 | 20 Jun 24 18:40 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-355113 sudo crio                             | cilium-355113             | jenkins | v1.33.1 | 20 Jun 24 18:40 UTC |                     |
	|         | config                                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-355113                                       | cilium-355113             | jenkins | v1.33.1 | 20 Jun 24 18:40 UTC | 20 Jun 24 18:40 UTC |
	| start   | -p force-systemd-env-380218                            | force-systemd-env-380218  | jenkins | v1.33.1 | 20 Jun 24 18:40 UTC | 20 Jun 24 18:41 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-939130                              | force-systemd-flag-939130 | jenkins | v1.33.1 | 20 Jun 24 18:41 UTC | 20 Jun 24 18:41 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-939130                           | force-systemd-flag-939130 | jenkins | v1.33.1 | 20 Jun 24 18:41 UTC | 20 Jun 24 18:41 UTC |
	| start   | -p cert-expiration-611852                              | cert-expiration-611852    | jenkins | v1.33.1 | 20 Jun 24 18:41 UTC | 20 Jun 24 18:41 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-380218                               | force-systemd-env-380218  | jenkins | v1.33.1 | 20 Jun 24 18:41 UTC | 20 Jun 24 18:41 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-380218                            | force-systemd-env-380218  | jenkins | v1.33.1 | 20 Jun 24 18:41 UTC | 20 Jun 24 18:41 UTC |
	| start   | -p cert-options-344744                                 | cert-options-344744       | jenkins | v1.33.1 | 20 Jun 24 18:41 UTC | 20 Jun 24 18:41 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | cert-options-344744 ssh                                | cert-options-344744       | jenkins | v1.33.1 | 20 Jun 24 18:41 UTC | 20 Jun 24 18:41 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-344744 -- sudo                         | cert-options-344744       | jenkins | v1.33.1 | 20 Jun 24 18:41 UTC | 20 Jun 24 18:41 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-344744                                 | cert-options-344744       | jenkins | v1.33.1 | 20 Jun 24 18:41 UTC | 20 Jun 24 18:41 UTC |
	| start   | -p old-k8s-version-337794                              | old-k8s-version-337794    | jenkins | v1.33.1 | 20 Jun 24 18:41 UTC | 20 Jun 24 18:44 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-611852                              | cert-expiration-611852    | jenkins | v1.33.1 | 20 Jun 24 18:44 UTC | 20 Jun 24 18:44 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-611852                              | cert-expiration-611852    | jenkins | v1.33.1 | 20 Jun 24 18:44 UTC | 20 Jun 24 18:44 UTC |
	| start   | -p no-preload-530880                                   | no-preload-530880         | jenkins | v1.33.1 | 20 Jun 24 18:44 UTC | 20 Jun 24 18:46 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-337794        | old-k8s-version-337794    | jenkins | v1.33.1 | 20 Jun 24 18:45 UTC | 20 Jun 24 18:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-337794                              | old-k8s-version-337794    | jenkins | v1.33.1 | 20 Jun 24 18:45 UTC | 20 Jun 24 18:45 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-337794             | old-k8s-version-337794    | jenkins | v1.33.1 | 20 Jun 24 18:45 UTC | 20 Jun 24 18:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-337794                              | old-k8s-version-337794    | jenkins | v1.33.1 | 20 Jun 24 18:45 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-530880             | no-preload-530880         | jenkins | v1.33.1 | 20 Jun 24 18:46 UTC | 20 Jun 24 18:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-530880                                   | no-preload-530880         | jenkins | v1.33.1 | 20 Jun 24 18:46 UTC | 20 Jun 24 18:46 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-530880                  | no-preload-530880         | jenkins | v1.33.1 | 20 Jun 24 18:46 UTC | 20 Jun 24 18:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-530880                                   | no-preload-530880         | jenkins | v1.33.1 | 20 Jun 24 18:46 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/20 18:46:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0620 18:46:33.490725  480164 out.go:291] Setting OutFile to fd 1 ...
	I0620 18:46:33.490846  480164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:46:33.490899  480164 out.go:304] Setting ErrFile to fd 2...
	I0620 18:46:33.490905  480164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:46:33.491173  480164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19106-274269/.minikube/bin
	I0620 18:46:33.491534  480164 out.go:298] Setting JSON to false
	I0620 18:46:33.492578  480164 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8944,"bootTime":1718900250,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0620 18:46:33.492648  480164 start.go:139] virtualization:  
	I0620 18:46:33.496049  480164 out.go:177] * [no-preload-530880] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0620 18:46:33.499172  480164 out.go:177]   - MINIKUBE_LOCATION=19106
	I0620 18:46:33.499246  480164 notify.go:220] Checking for updates...
	I0620 18:46:33.504215  480164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0620 18:46:33.506536  480164 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19106-274269/kubeconfig
	I0620 18:46:33.509413  480164 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19106-274269/.minikube
	I0620 18:46:33.511708  480164 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0620 18:46:33.514320  480164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0620 18:46:33.517049  480164 config.go:182] Loaded profile config "no-preload-530880": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0620 18:46:33.517619  480164 driver.go:392] Setting default libvirt URI to qemu:///system
	I0620 18:46:33.552466  480164 docker.go:122] docker version: linux-26.1.4:Docker Engine - Community
	I0620 18:46:33.552585  480164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0620 18:46:33.617193  480164 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2024-06-20 18:46:33.605083715 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0620 18:46:33.617298  480164 docker.go:295] overlay module found
	I0620 18:46:33.624264  480164 out.go:177] * Using the docker driver based on existing profile
	I0620 18:46:33.626600  480164 start.go:297] selected driver: docker
	I0620 18:46:33.626624  480164 start.go:901] validating driver "docker" against &{Name:no-preload-530880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-530880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0620 18:46:33.626755  480164 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0620 18:46:33.627460  480164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0620 18:46:33.684092  480164 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-06-20 18:46:33.674726668 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0620 18:46:33.684450  480164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0620 18:46:33.684488  480164 cni.go:84] Creating CNI manager for ""
	I0620 18:46:33.684500  480164 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0620 18:46:33.684547  480164 start.go:340] cluster config:
	{Name:no-preload-530880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-530880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0620 18:46:33.687231  480164 out.go:177] * Starting "no-preload-530880" primary control-plane node in "no-preload-530880" cluster
	I0620 18:46:33.689201  480164 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0620 18:46:33.691495  480164 out.go:177] * Pulling base image v0.0.44-1718753665-19106 ...
	I0620 18:46:33.693432  480164 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime containerd
	I0620 18:46:33.693515  480164 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 in local docker daemon
	I0620 18:46:33.693567  480164 profile.go:143] Saving config to /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/config.json ...
	I0620 18:46:33.693952  480164 cache.go:107] acquiring lock: {Name:mk0d30e61b84d22570222629885cbcbb198842a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0620 18:46:33.694041  480164 cache.go:115] /home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0620 18:46:33.694055  480164 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.283µs
	I0620 18:46:33.694069  480164 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0620 18:46:33.694086  480164 cache.go:107] acquiring lock: {Name:mk2b6518de233991eac6b3726d7d5474797ed542 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0620 18:46:33.694141  480164 cache.go:115] /home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2 exists
	I0620 18:46:33.694152  480164 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.2" -> "/home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2" took 67.421µs
	I0620 18:46:33.694158  480164 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.2 -> /home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.2 succeeded
	I0620 18:46:33.694175  480164 cache.go:107] acquiring lock: {Name:mk9b170930ba97a2495274351d787a6cba9a4290 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0620 18:46:33.694207  480164 cache.go:115] /home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2 exists
	I0620 18:46:33.694217  480164 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.2" -> "/home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2" took 42.822µs
	I0620 18:46:33.694224  480164 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.2 -> /home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.2 succeeded
	I0620 18:46:33.694233  480164 cache.go:107] acquiring lock: {Name:mk20b151646a3bf20df9b39a71e48bec43cc136a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0620 18:46:33.694321  480164 cache.go:115] /home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2 exists
	I0620 18:46:33.694343  480164 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.2" -> "/home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2" took 103.737µs
	I0620 18:46:33.694355  480164 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.2 -> /home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.2 succeeded
	I0620 18:46:33.694456  480164 cache.go:107] acquiring lock: {Name:mk184076b7cfe456ca6129a3cc92062d1b42f4f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0620 18:46:33.694509  480164 cache.go:115] /home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0620 18:46:33.694516  480164 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 63.573µs
	I0620 18:46:33.694522  480164 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0620 18:46:33.694533  480164 cache.go:107] acquiring lock: {Name:mkfa7757be0cad66a8cdd343f67de97fca769659 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0620 18:46:33.694558  480164 cache.go:115] /home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0620 18:46:33.694562  480164 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 31.417µs
	I0620 18:46:33.694568  480164 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0620 18:46:33.694578  480164 cache.go:107] acquiring lock: {Name:mkbbeaf42359c6f24290cadbc25b27662bc2c551 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0620 18:46:33.694616  480164 cache.go:115] /home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0620 18:46:33.694621  480164 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 44.742µs
	I0620 18:46:33.694627  480164 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0620 18:46:33.694369  480164 cache.go:107] acquiring lock: {Name:mk6cf8a2668fc6ccd2d2a73f5b4eb6bcb4fba778 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0620 18:46:33.694679  480164 cache.go:115] /home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2 exists
	I0620 18:46:33.694684  480164 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.2" -> "/home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2" took 317.373µs
	I0620 18:46:33.694692  480164 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.2 -> /home/jenkins/minikube-integration/19106-274269/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.2 succeeded
	I0620 18:46:33.694697  480164 cache.go:87] Successfully saved all images to host disk.
	I0620 18:46:33.719987  480164 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 in local docker daemon, skipping pull
	I0620 18:46:33.720013  480164 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 exists in daemon, skipping load
	I0620 18:46:33.720032  480164 cache.go:194] Successfully downloaded all kic artifacts
	I0620 18:46:33.720076  480164 start.go:360] acquireMachinesLock for no-preload-530880: {Name:mka2fd0eca3baf5309687b5fae7f681d55ff2b64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0620 18:46:33.720148  480164 start.go:364] duration metric: took 48.607µs to acquireMachinesLock for "no-preload-530880"
	I0620 18:46:33.720176  480164 start.go:96] Skipping create...Using existing machine configuration
	I0620 18:46:33.720182  480164 fix.go:54] fixHost starting: 
	I0620 18:46:33.720457  480164 cli_runner.go:164] Run: docker container inspect no-preload-530880 --format={{.State.Status}}
	I0620 18:46:33.736888  480164 fix.go:112] recreateIfNeeded on no-preload-530880: state=Stopped err=<nil>
	W0620 18:46:33.736919  480164 fix.go:138] unexpected machine state, will restart: <nil>
	I0620 18:46:33.739513  480164 out.go:177] * Restarting existing docker container for "no-preload-530880" ...
	I0620 18:46:34.849525  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:37.348072  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:33.741787  480164 cli_runner.go:164] Run: docker start no-preload-530880
	I0620 18:46:34.074101  480164 cli_runner.go:164] Run: docker container inspect no-preload-530880 --format={{.State.Status}}
	I0620 18:46:34.096591  480164 kic.go:430] container "no-preload-530880" state is running.
	I0620 18:46:34.096984  480164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-530880
	I0620 18:46:34.120890  480164 profile.go:143] Saving config to /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/config.json ...
	I0620 18:46:34.121127  480164 machine.go:94] provisionDockerMachine start ...
	I0620 18:46:34.121197  480164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-530880
	I0620 18:46:34.144719  480164 main.go:141] libmachine: Using SSH client type: native
	I0620 18:46:34.144970  480164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2bc0] 0x3e5420 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I0620 18:46:34.144979  480164 main.go:141] libmachine: About to run SSH command:
	hostname
	I0620 18:46:34.145687  480164 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34414->127.0.0.1:33445: read: connection reset by peer
	I0620 18:46:37.282466  480164 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-530880
	
	I0620 18:46:37.282491  480164 ubuntu.go:169] provisioning hostname "no-preload-530880"
	I0620 18:46:37.282554  480164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-530880
	I0620 18:46:37.308056  480164 main.go:141] libmachine: Using SSH client type: native
	I0620 18:46:37.308305  480164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2bc0] 0x3e5420 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I0620 18:46:37.308325  480164 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-530880 && echo "no-preload-530880" | sudo tee /etc/hostname
	I0620 18:46:37.455295  480164 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-530880
	
	I0620 18:46:37.455386  480164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-530880
	I0620 18:46:37.487272  480164 main.go:141] libmachine: Using SSH client type: native
	I0620 18:46:37.487537  480164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2bc0] 0x3e5420 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I0620 18:46:37.487559  480164 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-530880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-530880/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-530880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0620 18:46:37.619114  480164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0620 18:46:37.619142  480164 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19106-274269/.minikube CaCertPath:/home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19106-274269/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19106-274269/.minikube}
	I0620 18:46:37.619170  480164 ubuntu.go:177] setting up certificates
	I0620 18:46:37.619181  480164 provision.go:84] configureAuth start
	I0620 18:46:37.619245  480164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-530880
	I0620 18:46:37.636563  480164 provision.go:143] copyHostCerts
	I0620 18:46:37.636644  480164 exec_runner.go:144] found /home/jenkins/minikube-integration/19106-274269/.minikube/ca.pem, removing ...
	I0620 18:46:37.636658  480164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19106-274269/.minikube/ca.pem
	I0620 18:46:37.636736  480164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19106-274269/.minikube/ca.pem (1082 bytes)
	I0620 18:46:37.636849  480164 exec_runner.go:144] found /home/jenkins/minikube-integration/19106-274269/.minikube/cert.pem, removing ...
	I0620 18:46:37.636860  480164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19106-274269/.minikube/cert.pem
	I0620 18:46:37.636900  480164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19106-274269/.minikube/cert.pem (1123 bytes)
	I0620 18:46:37.636972  480164 exec_runner.go:144] found /home/jenkins/minikube-integration/19106-274269/.minikube/key.pem, removing ...
	I0620 18:46:37.636982  480164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19106-274269/.minikube/key.pem
	I0620 18:46:37.637011  480164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19106-274269/.minikube/key.pem (1679 bytes)
	I0620 18:46:37.637077  480164 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19106-274269/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca-key.pem org=jenkins.no-preload-530880 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-530880]
	I0620 18:46:38.175572  480164 provision.go:177] copyRemoteCerts
	I0620 18:46:38.175653  480164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0620 18:46:38.175703  480164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-530880
	I0620 18:46:38.192225  480164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/no-preload-530880/id_rsa Username:docker}
	I0620 18:46:38.288223  480164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0620 18:46:38.316898  480164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0620 18:46:38.345108  480164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0620 18:46:38.372122  480164 provision.go:87] duration metric: took 752.920941ms to configureAuth
	I0620 18:46:38.372151  480164 ubuntu.go:193] setting minikube options for container-runtime
	I0620 18:46:38.372347  480164 config.go:182] Loaded profile config "no-preload-530880": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0620 18:46:38.372355  480164 machine.go:97] duration metric: took 4.251213457s to provisionDockerMachine
	I0620 18:46:38.372362  480164 start.go:293] postStartSetup for "no-preload-530880" (driver="docker")
	I0620 18:46:38.372373  480164 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0620 18:46:38.372425  480164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0620 18:46:38.372477  480164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-530880
	I0620 18:46:38.388904  480164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/no-preload-530880/id_rsa Username:docker}
	I0620 18:46:38.484576  480164 ssh_runner.go:195] Run: cat /etc/os-release
	I0620 18:46:38.487753  480164 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0620 18:46:38.487831  480164 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0620 18:46:38.487848  480164 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0620 18:46:38.487856  480164 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0620 18:46:38.487866  480164 filesync.go:126] Scanning /home/jenkins/minikube-integration/19106-274269/.minikube/addons for local assets ...
	I0620 18:46:38.487923  480164 filesync.go:126] Scanning /home/jenkins/minikube-integration/19106-274269/.minikube/files for local assets ...
	I0620 18:46:38.488010  480164 filesync.go:149] local asset: /home/jenkins/minikube-integration/19106-274269/.minikube/files/etc/ssl/certs/2796712.pem -> 2796712.pem in /etc/ssl/certs
	I0620 18:46:38.488114  480164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0620 18:46:38.496653  480164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/files/etc/ssl/certs/2796712.pem --> /etc/ssl/certs/2796712.pem (1708 bytes)
	I0620 18:46:38.527316  480164 start.go:296] duration metric: took 154.939003ms for postStartSetup
	I0620 18:46:38.527472  480164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0620 18:46:38.527527  480164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-530880
	I0620 18:46:38.544222  480164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/no-preload-530880/id_rsa Username:docker}
	I0620 18:46:38.640630  480164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0620 18:46:38.644993  480164 fix.go:56] duration metric: took 4.924803514s for fixHost
	I0620 18:46:38.645057  480164 start.go:83] releasing machines lock for "no-preload-530880", held for 4.924892284s
	I0620 18:46:38.645153  480164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-530880
	I0620 18:46:38.661582  480164 ssh_runner.go:195] Run: cat /version.json
	I0620 18:46:38.661634  480164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0620 18:46:38.661668  480164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-530880
	I0620 18:46:38.661711  480164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-530880
	I0620 18:46:38.678763  480164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/no-preload-530880/id_rsa Username:docker}
	I0620 18:46:38.680086  480164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/no-preload-530880/id_rsa Username:docker}
	I0620 18:46:38.915224  480164 ssh_runner.go:195] Run: systemctl --version
	I0620 18:46:38.919555  480164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0620 18:46:38.923667  480164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0620 18:46:38.940892  480164 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0620 18:46:38.940971  480164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0620 18:46:38.949911  480164 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0620 18:46:38.949981  480164 start.go:494] detecting cgroup driver to use...
	I0620 18:46:38.950050  480164 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0620 18:46:38.950120  480164 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0620 18:46:38.964391  480164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0620 18:46:38.976172  480164 docker.go:217] disabling cri-docker service (if available) ...
	I0620 18:46:38.976276  480164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0620 18:46:38.988928  480164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0620 18:46:39.006342  480164 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0620 18:46:39.096627  480164 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0620 18:46:39.182570  480164 docker.go:233] disabling docker service ...
	I0620 18:46:39.182651  480164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0620 18:46:39.196269  480164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0620 18:46:39.208211  480164 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0620 18:46:39.288557  480164 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0620 18:46:39.388030  480164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0620 18:46:39.401190  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0620 18:46:39.418170  480164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0620 18:46:39.429124  480164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0620 18:46:39.439237  480164 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0620 18:46:39.439314  480164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0620 18:46:39.449590  480164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0620 18:46:39.459756  480164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0620 18:46:39.469984  480164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0620 18:46:39.481560  480164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0620 18:46:39.490698  480164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0620 18:46:39.500990  480164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0620 18:46:39.511183  480164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0620 18:46:39.521381  480164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0620 18:46:39.530423  480164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0620 18:46:39.538556  480164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0620 18:46:39.633360  480164 ssh_runner.go:195] Run: sudo systemctl restart containerd
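The sed edits above rewrite /etc/containerd/config.toml so the runtime matches the detected "cgroupfs" driver (SystemdCgroup = false), uses the runc v2 shim, points conf_dir at /etc/cni/net.d, and re-enables unprivileged ports, after which containerd is restarted. As a hedged illustration only, here is the SystemdCgroup rewrite expressed in Go over a local copy of config.toml (file name illustrative; minikube itself shells out to sed as shown above).

package main

import (
	"os"
	"regexp"
)

func main() {
	data, err := os.ReadFile("config.toml")
	if err != nil {
		panic(err)
	}
	// Same intent as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile("config.toml", out, 0o644); err != nil {
		panic(err)
	}
}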
	I0620 18:46:39.797844  480164 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0620 18:46:39.797930  480164 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0620 18:46:39.802224  480164 start.go:562] Will wait 60s for crictl version
	I0620 18:46:39.802291  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:46:39.806173  480164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0620 18:46:39.869030  480164 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.33
	RuntimeApiVersion:  v1
	I0620 18:46:39.869104  480164 ssh_runner.go:195] Run: containerd --version
	I0620 18:46:39.894033  480164 ssh_runner.go:195] Run: containerd --version
	I0620 18:46:39.917019  480164 out.go:177] * Preparing Kubernetes v1.30.2 on containerd 1.6.33 ...
	I0620 18:46:39.919529  480164 cli_runner.go:164] Run: docker network inspect no-preload-530880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0620 18:46:39.934266  480164 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0620 18:46:39.937860  480164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
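The bash one-liner above strips any stale host.minikube.internal entry from /etc/hosts and appends the gateway mapping 192.168.76.1. A rough Go equivalent, operating on a local copy named "hosts" so the sketch never touches the real file (the name and approach are illustrative, not minikube's code):

package main

import (
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirror the grep -v above: drop any existing host.minikube.internal entry.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	// Mirror the echo above: append the gateway mapping.
	kept = append(kept, "192.168.76.1\thost.minikube.internal")
	if err := os.WriteFile("hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}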
	I0620 18:46:39.948619  480164 kubeadm.go:877] updating cluster {Name:no-preload-530880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-530880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0620 18:46:39.948743  480164 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime containerd
	I0620 18:46:39.948792  480164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0620 18:46:39.987349  480164 containerd.go:627] all images are preloaded for containerd runtime.
	I0620 18:46:39.987375  480164 cache_images.go:84] Images are preloaded, skipping loading
	I0620 18:46:39.987384  480164 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.30.2 containerd true true} ...
	I0620 18:46:39.987501  480164 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-530880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:no-preload-530880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0620 18:46:39.987570  480164 ssh_runner.go:195] Run: sudo crictl info
	I0620 18:46:40.039112  480164 cni.go:84] Creating CNI manager for ""
	I0620 18:46:40.039139  480164 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0620 18:46:40.039151  480164 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0620 18:46:40.039176  480164 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-530880 NodeName:no-preload-530880 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0620 18:46:40.039327  480164 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-530880"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0620 18:46:40.039474  480164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0620 18:46:40.050357  480164 binaries.go:44] Found k8s binaries, skipping transfer
	I0620 18:46:40.050431  480164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0620 18:46:40.059994  480164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0620 18:46:40.079821  480164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0620 18:46:40.105551  480164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0620 18:46:40.125146  480164 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0620 18:46:40.129112  480164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0620 18:46:40.141010  480164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0620 18:46:40.227336  480164 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0620 18:46:40.243610  480164 certs.go:68] Setting up /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880 for IP: 192.168.76.2
	I0620 18:46:40.243672  480164 certs.go:194] generating shared ca certs ...
	I0620 18:46:40.243704  480164 certs.go:226] acquiring lock for ca certs: {Name:mk8b11ba3bc5463026cd3822a512e17542776a35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 18:46:40.243869  480164 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19106-274269/.minikube/ca.key
	I0620 18:46:40.243947  480164 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19106-274269/.minikube/proxy-client-ca.key
	I0620 18:46:40.243973  480164 certs.go:256] generating profile certs ...
	I0620 18:46:40.244092  480164 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/client.key
	I0620 18:46:40.244192  480164 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/apiserver.key.7f771765
	I0620 18:46:40.244260  480164 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/proxy-client.key
	I0620 18:46:40.244391  480164 certs.go:484] found cert: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/279671.pem (1338 bytes)
	W0620 18:46:40.244456  480164 certs.go:480] ignoring /home/jenkins/minikube-integration/19106-274269/.minikube/certs/279671_empty.pem, impossibly tiny 0 bytes
	I0620 18:46:40.244488  480164 certs.go:484] found cert: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca-key.pem (1679 bytes)
	I0620 18:46:40.244538  480164 certs.go:484] found cert: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/ca.pem (1082 bytes)
	I0620 18:46:40.244589  480164 certs.go:484] found cert: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/cert.pem (1123 bytes)
	I0620 18:46:40.244633  480164 certs.go:484] found cert: /home/jenkins/minikube-integration/19106-274269/.minikube/certs/key.pem (1679 bytes)
	I0620 18:46:40.244706  480164 certs.go:484] found cert: /home/jenkins/minikube-integration/19106-274269/.minikube/files/etc/ssl/certs/2796712.pem (1708 bytes)
	I0620 18:46:40.245401  480164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0620 18:46:40.274847  480164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0620 18:46:40.315652  480164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0620 18:46:40.345505  480164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0620 18:46:40.385295  480164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0620 18:46:40.435048  480164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0620 18:46:40.466686  480164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0620 18:46:40.493441  480164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0620 18:46:40.533766  480164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/files/etc/ssl/certs/2796712.pem --> /usr/share/ca-certificates/2796712.pem (1708 bytes)
	I0620 18:46:40.566465  480164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0620 18:46:40.591620  480164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19106-274269/.minikube/certs/279671.pem --> /usr/share/ca-certificates/279671.pem (1338 bytes)
	I0620 18:46:40.617002  480164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0620 18:46:40.635850  480164 ssh_runner.go:195] Run: openssl version
	I0620 18:46:40.643218  480164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2796712.pem && ln -fs /usr/share/ca-certificates/2796712.pem /etc/ssl/certs/2796712.pem"
	I0620 18:46:40.654343  480164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2796712.pem
	I0620 18:46:40.659107  480164 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 20 18:05 /usr/share/ca-certificates/2796712.pem
	I0620 18:46:40.659173  480164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2796712.pem
	I0620 18:46:40.667049  480164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2796712.pem /etc/ssl/certs/3ec20f2e.0"
	I0620 18:46:40.678110  480164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0620 18:46:40.689556  480164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0620 18:46:40.693253  480164 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 20 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I0620 18:46:40.693323  480164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0620 18:46:40.700706  480164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0620 18:46:40.709730  480164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/279671.pem && ln -fs /usr/share/ca-certificates/279671.pem /etc/ssl/certs/279671.pem"
	I0620 18:46:40.719704  480164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/279671.pem
	I0620 18:46:40.723587  480164 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 20 18:05 /usr/share/ca-certificates/279671.pem
	I0620 18:46:40.723701  480164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/279671.pem
	I0620 18:46:40.731316  480164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/279671.pem /etc/ssl/certs/51391683.0"
	I0620 18:46:40.740682  480164 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0620 18:46:40.744572  480164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0620 18:46:40.751641  480164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0620 18:46:40.758732  480164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0620 18:46:40.765899  480164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0620 18:46:40.772971  480164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0620 18:46:40.780111  480164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
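Each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration. The same check expressed in Go, using the first certificate path shown above (a sketch under that assumption, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of -checkend 86400: does the cert expire within the next day?
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regeneration needed")
	} else {
		fmt.Println("certificate valid for at least 24h")
	}
}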
	I0620 18:46:40.786909  480164 kubeadm.go:391] StartCluster: {Name:no-preload-530880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:no-preload-530880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0620 18:46:40.787034  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0620 18:46:40.787105  480164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0620 18:46:40.824937  480164 cri.go:89] found id: "f76e543336bf1cd1f5b883f0e280d4e501b27ffbda9d7f18459dd1225151e79d"
	I0620 18:46:40.824961  480164 cri.go:89] found id: "3dae3d565289125ddcc1df2e658f60655cd8373cd01a1b71c4f6b7feeaf3b7ab"
	I0620 18:46:40.824966  480164 cri.go:89] found id: "103965af9bf00a199dfa6a9347c46b1150c03c1a71db856bf062638dce08d88d"
	I0620 18:46:40.824970  480164 cri.go:89] found id: "41587230eb876d5c499563d181931de04c95ecb7bdbe809d3e232e7d942a4eea"
	I0620 18:46:40.824975  480164 cri.go:89] found id: "5806db24329a0a8150b8f7815a0272af79e74d2760f6a31d23460e5328e7cb1b"
	I0620 18:46:40.824979  480164 cri.go:89] found id: "b518668f2794393f0d52a9522913041022eefae75705e048fb7ac0b731564f95"
	I0620 18:46:40.824983  480164 cri.go:89] found id: "32128302a97d35fac10fa7e033950a439ae38177c2719d53e457c20c630b1680"
	I0620 18:46:40.824986  480164 cri.go:89] found id: "cb011b11c097c5a2e23ed0fb6b90fc4948c0a623b247d45e79a18c80783a4708"
	I0620 18:46:40.824989  480164 cri.go:89] found id: ""
	I0620 18:46:40.825040  480164 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0620 18:46:40.839555  480164 cri.go:116] JSON = null
	W0620 18:46:40.839610  480164 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0620 18:46:40.839683  480164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0620 18:46:40.865733  480164 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0620 18:46:40.865758  480164 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0620 18:46:40.865764  480164 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0620 18:46:40.865829  480164 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0620 18:46:40.876313  480164 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0620 18:46:40.877026  480164 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-530880" does not appear in /home/jenkins/minikube-integration/19106-274269/kubeconfig
	I0620 18:46:40.877322  480164 kubeconfig.go:62] /home/jenkins/minikube-integration/19106-274269/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-530880" cluster setting kubeconfig missing "no-preload-530880" context setting]
	I0620 18:46:40.877836  480164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19106-274269/kubeconfig: {Name:mke344b955a4582ad77895759c31c36670e563b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 18:46:40.879558  480164 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0620 18:46:40.891103  480164 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.76.2
	I0620 18:46:40.891140  480164 kubeadm.go:591] duration metric: took 25.369565ms to restartPrimaryControlPlane
	I0620 18:46:40.891151  480164 kubeadm.go:393] duration metric: took 104.252347ms to StartCluster
	I0620 18:46:40.891168  480164 settings.go:142] acquiring lock: {Name:mk5a1a69c9e50173b6bfe88004ea354d3f5ed8f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 18:46:40.891236  480164 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19106-274269/kubeconfig
	I0620 18:46:40.892192  480164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19106-274269/kubeconfig: {Name:mke344b955a4582ad77895759c31c36670e563b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 18:46:40.892388  480164 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0620 18:46:40.892671  480164 config.go:182] Loaded profile config "no-preload-530880": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0620 18:46:40.892711  480164 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0620 18:46:40.892777  480164 addons.go:69] Setting storage-provisioner=true in profile "no-preload-530880"
	I0620 18:46:40.892804  480164 addons.go:234] Setting addon storage-provisioner=true in "no-preload-530880"
	W0620 18:46:40.892811  480164 addons.go:243] addon storage-provisioner should already be in state true
	I0620 18:46:40.892815  480164 addons.go:69] Setting default-storageclass=true in profile "no-preload-530880"
	I0620 18:46:40.892832  480164 host.go:66] Checking if "no-preload-530880" exists ...
	I0620 18:46:40.892849  480164 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-530880"
	I0620 18:46:40.893141  480164 cli_runner.go:164] Run: docker container inspect no-preload-530880 --format={{.State.Status}}
	I0620 18:46:40.893277  480164 cli_runner.go:164] Run: docker container inspect no-preload-530880 --format={{.State.Status}}
	I0620 18:46:40.893546  480164 addons.go:69] Setting dashboard=true in profile "no-preload-530880"
	I0620 18:46:40.893579  480164 addons.go:234] Setting addon dashboard=true in "no-preload-530880"
	W0620 18:46:40.893586  480164 addons.go:243] addon dashboard should already be in state true
	I0620 18:46:40.893620  480164 host.go:66] Checking if "no-preload-530880" exists ...
	I0620 18:46:40.894019  480164 cli_runner.go:164] Run: docker container inspect no-preload-530880 --format={{.State.Status}}
	I0620 18:46:40.894690  480164 addons.go:69] Setting metrics-server=true in profile "no-preload-530880"
	I0620 18:46:40.894725  480164 addons.go:234] Setting addon metrics-server=true in "no-preload-530880"
	W0620 18:46:40.894732  480164 addons.go:243] addon metrics-server should already be in state true
	I0620 18:46:40.894772  480164 host.go:66] Checking if "no-preload-530880" exists ...
	I0620 18:46:40.895267  480164 cli_runner.go:164] Run: docker container inspect no-preload-530880 --format={{.State.Status}}
	I0620 18:46:40.899828  480164 out.go:177] * Verifying Kubernetes components...
	I0620 18:46:40.908227  480164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0620 18:46:40.951183  480164 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0620 18:46:40.959195  480164 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0620 18:46:40.959221  480164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0620 18:46:40.959295  480164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-530880
	I0620 18:46:40.977041  480164 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0620 18:46:40.979150  480164 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0620 18:46:40.979173  480164 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0620 18:46:40.979261  480164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-530880
	I0620 18:46:40.987987  480164 addons.go:234] Setting addon default-storageclass=true in "no-preload-530880"
	W0620 18:46:40.988010  480164 addons.go:243] addon default-storageclass should already be in state true
	I0620 18:46:40.988034  480164 host.go:66] Checking if "no-preload-530880" exists ...
	I0620 18:46:40.988434  480164 cli_runner.go:164] Run: docker container inspect no-preload-530880 --format={{.State.Status}}
	I0620 18:46:40.997416  480164 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0620 18:46:41.001264  480164 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0620 18:46:39.349427  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:41.849193  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:41.004654  480164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/no-preload-530880/id_rsa Username:docker}
	I0620 18:46:41.005399  480164 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0620 18:46:41.005419  480164 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0620 18:46:41.005489  480164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-530880
	I0620 18:46:41.027130  480164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/no-preload-530880/id_rsa Username:docker}
	I0620 18:46:41.050937  480164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/no-preload-530880/id_rsa Username:docker}
	I0620 18:46:41.053279  480164 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0620 18:46:41.053297  480164 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0620 18:46:41.053357  480164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-530880
	I0620 18:46:41.087206  480164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/no-preload-530880/id_rsa Username:docker}
	I0620 18:46:41.133761  480164 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0620 18:46:41.193547  480164 node_ready.go:35] waiting up to 6m0s for node "no-preload-530880" to be "Ready" ...
	I0620 18:46:41.301540  480164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0620 18:46:41.357989  480164 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0620 18:46:41.358014  480164 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0620 18:46:41.396847  480164 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0620 18:46:41.396871  480164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0620 18:46:41.403585  480164 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0620 18:46:41.403616  480164 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0620 18:46:41.497032  480164 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0620 18:46:41.497059  480164 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0620 18:46:41.532428  480164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0620 18:46:41.564030  480164 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0620 18:46:41.564068  480164 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0620 18:46:41.611737  480164 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0620 18:46:41.611763  480164 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0620 18:46:41.726092  480164 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0620 18:46:41.726117  480164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0620 18:46:41.797633  480164 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0620 18:46:41.797677  480164 retry.go:31] will retry after 242.093817ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
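The failure above is expected during a restart: the first `kubectl apply` races the API server coming back up on localhost:8443, so retry.go simply re-runs the command after a short delay. A generic Go sketch of that retry pattern follows; the command, manifest path, and delays are illustrative and do not reflect minikube's exact retry policy.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delays := []time.Duration{250 * time.Millisecond, 500 * time.Millisecond, time.Second}
	var lastErr error
	for attempt := 0; attempt <= len(delays); attempt++ {
		cmd := exec.Command("kubectl", "apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Printf("applied:\n%s", out)
			return
		}
		lastErr = fmt.Errorf("apply failed: %v: %s", err, out)
		if attempt < len(delays) {
			time.Sleep(delays[attempt]) // back off before the next attempt
		}
	}
	fmt.Println(lastErr)
}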
	I0620 18:46:41.809959  480164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0620 18:46:41.859306  480164 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0620 18:46:41.859335  480164 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0620 18:46:41.959027  480164 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0620 18:46:41.959055  480164 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0620 18:46:42.040949  480164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0620 18:46:42.095408  480164 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0620 18:46:42.095448  480164 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0620 18:46:42.232007  480164 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0620 18:46:42.232036  480164 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0620 18:46:42.373714  480164 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0620 18:46:42.373741  480164 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0620 18:46:42.468898  480164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0620 18:46:43.852419  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:46.348582  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:48.348879  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:46.245722  480164 node_ready.go:49] node "no-preload-530880" has status "Ready":"True"
	I0620 18:46:46.245752  480164 node_ready.go:38] duration metric: took 5.052173168s for node "no-preload-530880" to be "Ready" ...
	I0620 18:46:46.245764  480164 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0620 18:46:46.280964  480164 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c2msn" in "kube-system" namespace to be "Ready" ...
	I0620 18:46:46.291145  480164 pod_ready.go:92] pod "coredns-7db6d8ff4d-c2msn" in "kube-system" namespace has status "Ready":"True"
	I0620 18:46:46.291173  480164 pod_ready.go:81] duration metric: took 10.172755ms for pod "coredns-7db6d8ff4d-c2msn" in "kube-system" namespace to be "Ready" ...
	I0620 18:46:46.291186  480164 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-530880" in "kube-system" namespace to be "Ready" ...
	I0620 18:46:46.315110  480164 pod_ready.go:92] pod "etcd-no-preload-530880" in "kube-system" namespace has status "Ready":"True"
	I0620 18:46:46.315138  480164 pod_ready.go:81] duration metric: took 23.943907ms for pod "etcd-no-preload-530880" in "kube-system" namespace to be "Ready" ...
	I0620 18:46:46.315153  480164 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-530880" in "kube-system" namespace to be "Ready" ...
	I0620 18:46:46.322037  480164 pod_ready.go:92] pod "kube-apiserver-no-preload-530880" in "kube-system" namespace has status "Ready":"True"
	I0620 18:46:46.322063  480164 pod_ready.go:81] duration metric: took 6.901387ms for pod "kube-apiserver-no-preload-530880" in "kube-system" namespace to be "Ready" ...
	I0620 18:46:46.322083  480164 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-530880" in "kube-system" namespace to be "Ready" ...
	I0620 18:46:46.331396  480164 pod_ready.go:92] pod "kube-controller-manager-no-preload-530880" in "kube-system" namespace has status "Ready":"True"
	I0620 18:46:46.331425  480164 pod_ready.go:81] duration metric: took 9.333047ms for pod "kube-controller-manager-no-preload-530880" in "kube-system" namespace to be "Ready" ...
	I0620 18:46:46.331437  480164 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cqmb2" in "kube-system" namespace to be "Ready" ...
	I0620 18:46:46.444101  480164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.911634788s)
	I0620 18:46:46.456594  480164 pod_ready.go:92] pod "kube-proxy-cqmb2" in "kube-system" namespace has status "Ready":"True"
	I0620 18:46:46.456621  480164 pod_ready.go:81] duration metric: took 125.176374ms for pod "kube-proxy-cqmb2" in "kube-system" namespace to be "Ready" ...
	I0620 18:46:46.456634  480164 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-530880" in "kube-system" namespace to be "Ready" ...
	I0620 18:46:48.465477  480164 pod_ready.go:102] pod "kube-scheduler-no-preload-530880" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:49.032981  480164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.222980083s)
	I0620 18:46:49.033064  480164 addons.go:475] Verifying addon metrics-server=true in "no-preload-530880"
	I0620 18:46:49.283925  480164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.242931386s)
	I0620 18:46:49.284082  480164 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.815151689s)
	I0620 18:46:49.288567  480164 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-530880 addons enable metrics-server
	
	I0620 18:46:49.290796  480164 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0620 18:46:50.848699  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:52.849772  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:49.292903  480164 addons.go:510] duration metric: took 8.400182401s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0620 18:46:50.964608  480164 pod_ready.go:102] pod "kube-scheduler-no-preload-530880" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:53.462407  480164 pod_ready.go:102] pod "kube-scheduler-no-preload-530880" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:55.356079  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:57.853684  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:55.462408  480164 pod_ready.go:92] pod "kube-scheduler-no-preload-530880" in "kube-system" namespace has status "Ready":"True"
	I0620 18:46:55.462436  480164 pod_ready.go:81] duration metric: took 9.005792911s for pod "kube-scheduler-no-preload-530880" in "kube-system" namespace to be "Ready" ...
	I0620 18:46:55.462447  480164 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace to be "Ready" ...
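The pod_ready.go waits in this log poll each pod until its PodReady condition reports True or the 6m0s budget runs out. A minimal client-go sketch of that check, assuming an illustrative kubeconfig path and using the metrics-server pod name from the line above (this is not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-569cc877fc-spqvt", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second) // poll interval; minikube's cadence may differ
	}
	fmt.Println("timed out waiting for pod to be Ready")
}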
	I0620 18:46:57.468305  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:00.349893  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:02.850400  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:46:59.468548  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:01.469335  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:05.348417  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:07.428563  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:03.968711  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:05.969335  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:08.470232  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:09.851521  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:12.349832  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:10.476001  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:12.968381  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:14.849037  474995 pod_ready.go:102] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:15.849745  474995 pod_ready.go:92] pod "etcd-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"True"
	I0620 18:47:15.849770  474995 pod_ready.go:81] duration metric: took 1m28.506965279s for pod "etcd-old-k8s-version-337794" in "kube-system" namespace to be "Ready" ...
	I0620 18:47:15.849786  474995 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-337794" in "kube-system" namespace to be "Ready" ...
	I0620 18:47:15.855468  474995 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"True"
	I0620 18:47:15.855494  474995 pod_ready.go:81] duration metric: took 5.700262ms for pod "kube-apiserver-old-k8s-version-337794" in "kube-system" namespace to be "Ready" ...
	I0620 18:47:15.855508  474995 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-337794" in "kube-system" namespace to be "Ready" ...
	I0620 18:47:15.861138  474995 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"True"
	I0620 18:47:15.861163  474995 pod_ready.go:81] duration metric: took 5.647503ms for pod "kube-controller-manager-old-k8s-version-337794" in "kube-system" namespace to be "Ready" ...
	I0620 18:47:15.861175  474995 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h4r8m" in "kube-system" namespace to be "Ready" ...
	I0620 18:47:15.866333  474995 pod_ready.go:92] pod "kube-proxy-h4r8m" in "kube-system" namespace has status "Ready":"True"
	I0620 18:47:15.866357  474995 pod_ready.go:81] duration metric: took 5.175308ms for pod "kube-proxy-h4r8m" in "kube-system" namespace to be "Ready" ...
	I0620 18:47:15.866369  474995 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-337794" in "kube-system" namespace to be "Ready" ...
	I0620 18:47:15.872091  474995 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-337794" in "kube-system" namespace has status "Ready":"True"
	I0620 18:47:15.872116  474995 pod_ready.go:81] duration metric: took 5.738793ms for pod "kube-scheduler-old-k8s-version-337794" in "kube-system" namespace to be "Ready" ...
	I0620 18:47:15.872128  474995 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace to be "Ready" ...
	I0620 18:47:17.879482  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:14.968756  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:16.969141  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:20.377943  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:22.878380  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:18.970330  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:21.468639  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:24.879059  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:27.378857  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:23.969448  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:26.468379  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:29.880671  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:32.377891  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:28.969130  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:30.971859  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:33.471260  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:34.378925  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:36.879091  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:35.968545  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:37.969548  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:38.879398  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:41.378023  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:43.378583  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:40.469422  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:42.968242  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:45.878410  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:47.878747  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:44.968623  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:46.969344  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:50.378211  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:52.878819  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:49.468613  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:51.969063  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:55.377953  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:57.378340  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:54.468468  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:56.469209  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:59.879193  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:02.383795  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:47:58.969244  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:01.469479  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:04.878354  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:07.378889  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:03.968833  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:06.467988  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:08.468755  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:09.878115  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:11.878787  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:10.468796  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:12.968189  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:14.378388  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:16.878765  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:14.968417  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:17.468405  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:19.378075  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:21.878074  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:19.468545  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:21.468845  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:23.469553  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:23.878364  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:25.878833  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:28.378795  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:25.969358  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:28.469026  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:30.878837  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:32.879215  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:30.969598  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:33.468159  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:34.879379  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:37.378069  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:35.468613  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:37.473977  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:39.378631  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:41.878869  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:39.968966  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:42.468109  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:43.878920  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:46.377960  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:44.969234  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:47.468196  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:48.878314  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:51.378672  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:49.469405  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:51.968471  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:53.879513  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:56.378225  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:58.437149  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:53.968824  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:55.969713  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:48:58.469884  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:00.879888  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:03.378362  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:00.471244  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:02.969501  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:05.878776  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:08.377726  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:05.468440  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:07.469886  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:10.378674  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:12.878146  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:09.475391  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:11.968446  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:14.878576  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:16.878873  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:13.968754  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:15.970048  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:18.469269  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:19.378184  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:21.878629  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:20.969781  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:23.469264  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:23.879303  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:26.378224  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:25.476587  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:27.969040  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:28.877767  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:30.878808  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:33.378852  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:30.469814  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:32.969098  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:35.879565  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:38.378051  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:35.469179  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:37.968355  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:40.877938  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:42.878558  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:39.968680  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:42.468875  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:45.377841  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:47.378330  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:44.468905  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:46.968755  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:49.878410  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:52.378806  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:48.969195  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:51.469556  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:54.878721  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:57.378940  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:53.968349  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:55.968599  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:57.969680  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:49:59.879227  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:02.377976  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:00.470745  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:02.969715  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:04.378539  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:06.878453  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:05.468359  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:07.468725  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:09.378492  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:11.378532  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:09.968743  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:12.468322  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:13.883691  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:16.378521  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:14.468689  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:16.969615  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:18.878844  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:21.378500  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:23.378692  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:19.468790  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:21.968550  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:25.378781  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:27.380252  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:23.969562  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:26.469242  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:28.469657  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:29.880359  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:32.381202  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:30.971165  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:33.468427  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:34.879053  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:37.383884  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:35.469218  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:37.469381  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:39.399920  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:41.878946  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:39.969010  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:42.468622  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:44.377798  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:46.378331  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:48.378534  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:44.468786  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:46.969930  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:50.878749  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:53.378809  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:49.469702  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:51.968873  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:55.380295  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:57.884774  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:54.468345  480164 pod_ready.go:102] pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace has status "Ready":"False"
	I0620 18:50:55.473482  480164 pod_ready.go:81] duration metric: took 4m0.011020772s for pod "metrics-server-569cc877fc-spqvt" in "kube-system" namespace to be "Ready" ...
	E0620 18:50:55.473520  480164 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0620 18:50:55.473529  480164 pod_ready.go:38] duration metric: took 4m9.227754997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0620 18:50:55.473541  480164 api_server.go:52] waiting for apiserver process to appear ...
	I0620 18:50:55.473570  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0620 18:50:55.473630  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0620 18:50:55.514691  480164 cri.go:89] found id: "b4ff8b0c606a7e7063350d5957b42f0bc17dbffdfa353b2789e75a300d6e4d87"
	I0620 18:50:55.514714  480164 cri.go:89] found id: "32128302a97d35fac10fa7e033950a439ae38177c2719d53e457c20c630b1680"
	I0620 18:50:55.514719  480164 cri.go:89] found id: ""
	I0620 18:50:55.514726  480164 logs.go:276] 2 containers: [b4ff8b0c606a7e7063350d5957b42f0bc17dbffdfa353b2789e75a300d6e4d87 32128302a97d35fac10fa7e033950a439ae38177c2719d53e457c20c630b1680]
	I0620 18:50:55.514792  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:50:55.518418  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:50:55.521756  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0620 18:50:55.521842  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0620 18:50:55.573717  480164 cri.go:89] found id: "7f8a08978329e7c1a1ec083b484e7f92ea8e21d7f9739a5b8c6027567d5c9aa8"
	I0620 18:50:55.573740  480164 cri.go:89] found id: "b518668f2794393f0d52a9522913041022eefae75705e048fb7ac0b731564f95"
	I0620 18:50:55.573746  480164 cri.go:89] found id: ""
	I0620 18:50:55.573754  480164 logs.go:276] 2 containers: [7f8a08978329e7c1a1ec083b484e7f92ea8e21d7f9739a5b8c6027567d5c9aa8 b518668f2794393f0d52a9522913041022eefae75705e048fb7ac0b731564f95]
	I0620 18:50:55.573808  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:50:55.577345  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:50:55.580601  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0620 18:50:55.580669  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0620 18:50:55.626357  480164 cri.go:89] found id: "ca1e287c07f31d6bef3a1d780045b3a8e7710459444cfd0dc5aec40bdf817849"
	I0620 18:50:55.626435  480164 cri.go:89] found id: "f76e543336bf1cd1f5b883f0e280d4e501b27ffbda9d7f18459dd1225151e79d"
	I0620 18:50:55.626443  480164 cri.go:89] found id: ""
	I0620 18:50:55.626450  480164 logs.go:276] 2 containers: [ca1e287c07f31d6bef3a1d780045b3a8e7710459444cfd0dc5aec40bdf817849 f76e543336bf1cd1f5b883f0e280d4e501b27ffbda9d7f18459dd1225151e79d]
	I0620 18:50:55.626528  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:50:55.630061  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:50:55.633467  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0620 18:50:55.633583  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0620 18:50:55.669841  480164 cri.go:89] found id: "7c9c3650337a9ad4c606fb7de7c270bc8a8cb013089e2b3a5447927a1ac37a66"
	I0620 18:50:55.669876  480164 cri.go:89] found id: "cb011b11c097c5a2e23ed0fb6b90fc4948c0a623b247d45e79a18c80783a4708"
	I0620 18:50:55.669881  480164 cri.go:89] found id: ""
	I0620 18:50:55.669888  480164 logs.go:276] 2 containers: [7c9c3650337a9ad4c606fb7de7c270bc8a8cb013089e2b3a5447927a1ac37a66 cb011b11c097c5a2e23ed0fb6b90fc4948c0a623b247d45e79a18c80783a4708]
	I0620 18:50:55.669946  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:50:55.673672  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:50:55.677051  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0620 18:50:55.677130  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0620 18:50:55.721460  480164 cri.go:89] found id: "9fd9f632d01d3c34e1da4a8764e09e08aa6230252352139f058cd773a2d7b242"
	I0620 18:50:55.721491  480164 cri.go:89] found id: "41587230eb876d5c499563d181931de04c95ecb7bdbe809d3e232e7d942a4eea"
	I0620 18:50:55.721497  480164 cri.go:89] found id: ""
	I0620 18:50:55.721504  480164 logs.go:276] 2 containers: [9fd9f632d01d3c34e1da4a8764e09e08aa6230252352139f058cd773a2d7b242 41587230eb876d5c499563d181931de04c95ecb7bdbe809d3e232e7d942a4eea]
	I0620 18:50:55.721559  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:50:55.725033  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:50:55.728541  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0620 18:50:55.728678  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0620 18:50:55.765240  480164 cri.go:89] found id: "299b91aff4c86ae8dc85313f36a5f9dc38efbd838ecfd8c78040a9d648044751"
	I0620 18:50:55.765264  480164 cri.go:89] found id: "5806db24329a0a8150b8f7815a0272af79e74d2760f6a31d23460e5328e7cb1b"
	I0620 18:50:55.765269  480164 cri.go:89] found id: ""
	I0620 18:50:55.765276  480164 logs.go:276] 2 containers: [299b91aff4c86ae8dc85313f36a5f9dc38efbd838ecfd8c78040a9d648044751 5806db24329a0a8150b8f7815a0272af79e74d2760f6a31d23460e5328e7cb1b]
	I0620 18:50:55.765352  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:50:55.769097  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:50:55.773829  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0620 18:50:55.773917  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0620 18:50:55.820238  480164 cri.go:89] found id: "1a4909130a39c512a4d52fe89bf1c437e8a29f1d6cd2874c81bd4302ad52a1db"
	I0620 18:50:55.820260  480164 cri.go:89] found id: "3dae3d565289125ddcc1df2e658f60655cd8373cd01a1b71c4f6b7feeaf3b7ab"
	I0620 18:50:55.820265  480164 cri.go:89] found id: ""
	I0620 18:50:55.820272  480164 logs.go:276] 2 containers: [1a4909130a39c512a4d52fe89bf1c437e8a29f1d6cd2874c81bd4302ad52a1db 3dae3d565289125ddcc1df2e658f60655cd8373cd01a1b71c4f6b7feeaf3b7ab]
	I0620 18:50:55.820375  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:50:55.824192  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:50:55.827525  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0620 18:50:55.827600  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0620 18:50:55.867461  480164 cri.go:89] found id: "2ab864d8d9dac64f868645147111f7e876b80bea9d07a7cce1391ea1d8f0b803"
	I0620 18:50:55.867485  480164 cri.go:89] found id: ""
	I0620 18:50:55.867493  480164 logs.go:276] 1 containers: [2ab864d8d9dac64f868645147111f7e876b80bea9d07a7cce1391ea1d8f0b803]
	I0620 18:50:55.867566  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:50:55.871957  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0620 18:50:55.872076  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0620 18:50:55.920463  480164 cri.go:89] found id: "3a5532e08afdf23ab38d0c36d399d912a8e9b0a53300a49eb09d3a8a89eadc32"
	I0620 18:50:55.920487  480164 cri.go:89] found id: "8e0fbeb1faaf04145fb91474a4fefe89d03863858960210a2d6fe1c0470e27b2"
	I0620 18:50:55.920492  480164 cri.go:89] found id: ""
	I0620 18:50:55.920499  480164 logs.go:276] 2 containers: [3a5532e08afdf23ab38d0c36d399d912a8e9b0a53300a49eb09d3a8a89eadc32 8e0fbeb1faaf04145fb91474a4fefe89d03863858960210a2d6fe1c0470e27b2]
	I0620 18:50:55.920553  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:50:55.924247  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:50:55.927751  480164 logs.go:123] Gathering logs for kube-apiserver [b4ff8b0c606a7e7063350d5957b42f0bc17dbffdfa353b2789e75a300d6e4d87] ...
	I0620 18:50:55.927801  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4ff8b0c606a7e7063350d5957b42f0bc17dbffdfa353b2789e75a300d6e4d87"
	I0620 18:50:55.985958  480164 logs.go:123] Gathering logs for etcd [7f8a08978329e7c1a1ec083b484e7f92ea8e21d7f9739a5b8c6027567d5c9aa8] ...
	I0620 18:50:55.985994  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f8a08978329e7c1a1ec083b484e7f92ea8e21d7f9739a5b8c6027567d5c9aa8"
	I0620 18:50:56.032502  480164 logs.go:123] Gathering logs for kube-scheduler [cb011b11c097c5a2e23ed0fb6b90fc4948c0a623b247d45e79a18c80783a4708] ...
	I0620 18:50:56.032535  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb011b11c097c5a2e23ed0fb6b90fc4948c0a623b247d45e79a18c80783a4708"
	I0620 18:50:56.088208  480164 logs.go:123] Gathering logs for storage-provisioner [3a5532e08afdf23ab38d0c36d399d912a8e9b0a53300a49eb09d3a8a89eadc32] ...
	I0620 18:50:56.088238  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a5532e08afdf23ab38d0c36d399d912a8e9b0a53300a49eb09d3a8a89eadc32"
	I0620 18:50:56.128334  480164 logs.go:123] Gathering logs for containerd ...
	I0620 18:50:56.128363  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0620 18:50:56.200551  480164 logs.go:123] Gathering logs for kubelet ...
	I0620 18:50:56.200585  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0620 18:50:56.246794  480164 logs.go:138] Found kubelet problem: Jun 20 18:46:59 no-preload-530880 kubelet[658]: W0620 18:46:59.268543     658 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-530880" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-530880' and this object
	W0620 18:50:56.247102  480164 logs.go:138] Found kubelet problem: Jun 20 18:46:59 no-preload-530880 kubelet[658]: E0620 18:46:59.268604     658 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-530880" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-530880' and this object
	I0620 18:50:56.277329  480164 logs.go:123] Gathering logs for dmesg ...
	I0620 18:50:56.277365  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0620 18:50:56.302498  480164 logs.go:123] Gathering logs for describe nodes ...
	I0620 18:50:56.302529  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0620 18:50:56.462446  480164 logs.go:123] Gathering logs for kube-proxy [41587230eb876d5c499563d181931de04c95ecb7bdbe809d3e232e7d942a4eea] ...
	I0620 18:50:56.462482  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41587230eb876d5c499563d181931de04c95ecb7bdbe809d3e232e7d942a4eea"
	I0620 18:50:56.507750  480164 logs.go:123] Gathering logs for kube-controller-manager [299b91aff4c86ae8dc85313f36a5f9dc38efbd838ecfd8c78040a9d648044751] ...
	I0620 18:50:56.507830  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 299b91aff4c86ae8dc85313f36a5f9dc38efbd838ecfd8c78040a9d648044751"
	I0620 18:50:56.575111  480164 logs.go:123] Gathering logs for storage-provisioner [8e0fbeb1faaf04145fb91474a4fefe89d03863858960210a2d6fe1c0470e27b2] ...
	I0620 18:50:56.575153  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e0fbeb1faaf04145fb91474a4fefe89d03863858960210a2d6fe1c0470e27b2"
	I0620 18:50:56.639545  480164 logs.go:123] Gathering logs for kube-proxy [9fd9f632d01d3c34e1da4a8764e09e08aa6230252352139f058cd773a2d7b242] ...
	I0620 18:50:56.639581  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9fd9f632d01d3c34e1da4a8764e09e08aa6230252352139f058cd773a2d7b242"
	I0620 18:50:56.697143  480164 logs.go:123] Gathering logs for kube-apiserver [32128302a97d35fac10fa7e033950a439ae38177c2719d53e457c20c630b1680] ...
	I0620 18:50:56.697171  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32128302a97d35fac10fa7e033950a439ae38177c2719d53e457c20c630b1680"
	I0620 18:50:56.752636  480164 logs.go:123] Gathering logs for coredns [ca1e287c07f31d6bef3a1d780045b3a8e7710459444cfd0dc5aec40bdf817849] ...
	I0620 18:50:56.752681  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca1e287c07f31d6bef3a1d780045b3a8e7710459444cfd0dc5aec40bdf817849"
	I0620 18:50:56.794431  480164 logs.go:123] Gathering logs for coredns [f76e543336bf1cd1f5b883f0e280d4e501b27ffbda9d7f18459dd1225151e79d] ...
	I0620 18:50:56.794458  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f76e543336bf1cd1f5b883f0e280d4e501b27ffbda9d7f18459dd1225151e79d"
	I0620 18:50:56.834848  480164 logs.go:123] Gathering logs for kindnet [1a4909130a39c512a4d52fe89bf1c437e8a29f1d6cd2874c81bd4302ad52a1db] ...
	I0620 18:50:56.834880  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a4909130a39c512a4d52fe89bf1c437e8a29f1d6cd2874c81bd4302ad52a1db"
	I0620 18:50:56.891587  480164 logs.go:123] Gathering logs for kindnet [3dae3d565289125ddcc1df2e658f60655cd8373cd01a1b71c4f6b7feeaf3b7ab] ...
	I0620 18:50:56.891626  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dae3d565289125ddcc1df2e658f60655cd8373cd01a1b71c4f6b7feeaf3b7ab"
	I0620 18:50:56.935878  480164 logs.go:123] Gathering logs for kubernetes-dashboard [2ab864d8d9dac64f868645147111f7e876b80bea9d07a7cce1391ea1d8f0b803] ...
	I0620 18:50:56.935908  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab864d8d9dac64f868645147111f7e876b80bea9d07a7cce1391ea1d8f0b803"
	I0620 18:50:56.983809  480164 logs.go:123] Gathering logs for container status ...
	I0620 18:50:56.983840  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0620 18:50:57.030346  480164 logs.go:123] Gathering logs for etcd [b518668f2794393f0d52a9522913041022eefae75705e048fb7ac0b731564f95] ...
	I0620 18:50:57.030380  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b518668f2794393f0d52a9522913041022eefae75705e048fb7ac0b731564f95"
	I0620 18:50:57.089706  480164 logs.go:123] Gathering logs for kube-scheduler [7c9c3650337a9ad4c606fb7de7c270bc8a8cb013089e2b3a5447927a1ac37a66] ...
	I0620 18:50:57.089744  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c9c3650337a9ad4c606fb7de7c270bc8a8cb013089e2b3a5447927a1ac37a66"
	I0620 18:50:57.142403  480164 logs.go:123] Gathering logs for kube-controller-manager [5806db24329a0a8150b8f7815a0272af79e74d2760f6a31d23460e5328e7cb1b] ...
	I0620 18:50:57.142438  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5806db24329a0a8150b8f7815a0272af79e74d2760f6a31d23460e5328e7cb1b"
	I0620 18:50:57.202578  480164 out.go:304] Setting ErrFile to fd 2...
	I0620 18:50:57.202604  480164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0620 18:50:57.202684  480164 out.go:239] X Problems detected in kubelet:
	W0620 18:50:57.202697  480164 out.go:239]   Jun 20 18:46:59 no-preload-530880 kubelet[658]: W0620 18:46:59.268543     658 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-530880" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-530880' and this object
	W0620 18:50:57.202724  480164 out.go:239]   Jun 20 18:46:59 no-preload-530880 kubelet[658]: E0620 18:46:59.268604     658 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-530880" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-530880' and this object
	I0620 18:50:57.202740  480164 out.go:304] Setting ErrFile to fd 2...
	I0620 18:50:57.202746  480164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:51:00.379513  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:51:02.878956  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:51:05.378040  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:51:07.378498  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:51:07.203436  480164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0620 18:51:07.215936  480164 api_server.go:72] duration metric: took 4m26.323510874s to wait for apiserver process to appear ...
	I0620 18:51:07.215959  480164 api_server.go:88] waiting for apiserver healthz status ...
	I0620 18:51:07.215997  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0620 18:51:07.216054  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0620 18:51:07.256000  480164 cri.go:89] found id: "b4ff8b0c606a7e7063350d5957b42f0bc17dbffdfa353b2789e75a300d6e4d87"
	I0620 18:51:07.256019  480164 cri.go:89] found id: "32128302a97d35fac10fa7e033950a439ae38177c2719d53e457c20c630b1680"
	I0620 18:51:07.256024  480164 cri.go:89] found id: ""
	I0620 18:51:07.256035  480164 logs.go:276] 2 containers: [b4ff8b0c606a7e7063350d5957b42f0bc17dbffdfa353b2789e75a300d6e4d87 32128302a97d35fac10fa7e033950a439ae38177c2719d53e457c20c630b1680]
	I0620 18:51:07.256090  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:07.260153  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:07.263443  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0620 18:51:07.263509  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0620 18:51:07.299638  480164 cri.go:89] found id: "7f8a08978329e7c1a1ec083b484e7f92ea8e21d7f9739a5b8c6027567d5c9aa8"
	I0620 18:51:07.299658  480164 cri.go:89] found id: "b518668f2794393f0d52a9522913041022eefae75705e048fb7ac0b731564f95"
	I0620 18:51:07.299663  480164 cri.go:89] found id: ""
	I0620 18:51:07.299669  480164 logs.go:276] 2 containers: [7f8a08978329e7c1a1ec083b484e7f92ea8e21d7f9739a5b8c6027567d5c9aa8 b518668f2794393f0d52a9522913041022eefae75705e048fb7ac0b731564f95]
	I0620 18:51:07.299720  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:07.304848  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:07.308725  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0620 18:51:07.308799  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0620 18:51:07.347692  480164 cri.go:89] found id: "ca1e287c07f31d6bef3a1d780045b3a8e7710459444cfd0dc5aec40bdf817849"
	I0620 18:51:07.347713  480164 cri.go:89] found id: "f76e543336bf1cd1f5b883f0e280d4e501b27ffbda9d7f18459dd1225151e79d"
	I0620 18:51:07.347718  480164 cri.go:89] found id: ""
	I0620 18:51:07.347726  480164 logs.go:276] 2 containers: [ca1e287c07f31d6bef3a1d780045b3a8e7710459444cfd0dc5aec40bdf817849 f76e543336bf1cd1f5b883f0e280d4e501b27ffbda9d7f18459dd1225151e79d]
	I0620 18:51:07.347780  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:07.351870  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:07.355867  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0620 18:51:07.355944  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0620 18:51:07.398702  480164 cri.go:89] found id: "7c9c3650337a9ad4c606fb7de7c270bc8a8cb013089e2b3a5447927a1ac37a66"
	I0620 18:51:07.398727  480164 cri.go:89] found id: "cb011b11c097c5a2e23ed0fb6b90fc4948c0a623b247d45e79a18c80783a4708"
	I0620 18:51:07.398732  480164 cri.go:89] found id: ""
	I0620 18:51:07.398739  480164 logs.go:276] 2 containers: [7c9c3650337a9ad4c606fb7de7c270bc8a8cb013089e2b3a5447927a1ac37a66 cb011b11c097c5a2e23ed0fb6b90fc4948c0a623b247d45e79a18c80783a4708]
	I0620 18:51:07.398794  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:07.402252  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:07.405756  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0620 18:51:07.405895  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0620 18:51:07.442924  480164 cri.go:89] found id: "9fd9f632d01d3c34e1da4a8764e09e08aa6230252352139f058cd773a2d7b242"
	I0620 18:51:07.442944  480164 cri.go:89] found id: "41587230eb876d5c499563d181931de04c95ecb7bdbe809d3e232e7d942a4eea"
	I0620 18:51:07.442948  480164 cri.go:89] found id: ""
	I0620 18:51:07.442955  480164 logs.go:276] 2 containers: [9fd9f632d01d3c34e1da4a8764e09e08aa6230252352139f058cd773a2d7b242 41587230eb876d5c499563d181931de04c95ecb7bdbe809d3e232e7d942a4eea]
	I0620 18:51:07.443049  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:07.446976  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:07.450477  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0620 18:51:07.450586  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0620 18:51:07.489887  480164 cri.go:89] found id: "299b91aff4c86ae8dc85313f36a5f9dc38efbd838ecfd8c78040a9d648044751"
	I0620 18:51:07.489908  480164 cri.go:89] found id: "5806db24329a0a8150b8f7815a0272af79e74d2760f6a31d23460e5328e7cb1b"
	I0620 18:51:07.489913  480164 cri.go:89] found id: ""
	I0620 18:51:07.489921  480164 logs.go:276] 2 containers: [299b91aff4c86ae8dc85313f36a5f9dc38efbd838ecfd8c78040a9d648044751 5806db24329a0a8150b8f7815a0272af79e74d2760f6a31d23460e5328e7cb1b]
	I0620 18:51:07.489981  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:07.493541  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:07.497037  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0620 18:51:07.497136  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0620 18:51:07.534395  480164 cri.go:89] found id: "1a4909130a39c512a4d52fe89bf1c437e8a29f1d6cd2874c81bd4302ad52a1db"
	I0620 18:51:07.534419  480164 cri.go:89] found id: "3dae3d565289125ddcc1df2e658f60655cd8373cd01a1b71c4f6b7feeaf3b7ab"
	I0620 18:51:07.534424  480164 cri.go:89] found id: ""
	I0620 18:51:07.534432  480164 logs.go:276] 2 containers: [1a4909130a39c512a4d52fe89bf1c437e8a29f1d6cd2874c81bd4302ad52a1db 3dae3d565289125ddcc1df2e658f60655cd8373cd01a1b71c4f6b7feeaf3b7ab]
	I0620 18:51:07.534486  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:07.538140  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:07.541426  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0620 18:51:07.541503  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0620 18:51:07.579889  480164 cri.go:89] found id: "2ab864d8d9dac64f868645147111f7e876b80bea9d07a7cce1391ea1d8f0b803"
	I0620 18:51:07.579918  480164 cri.go:89] found id: ""
	I0620 18:51:07.579933  480164 logs.go:276] 1 containers: [2ab864d8d9dac64f868645147111f7e876b80bea9d07a7cce1391ea1d8f0b803]
	I0620 18:51:07.580002  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:07.583726  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0620 18:51:07.583805  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0620 18:51:07.620737  480164 cri.go:89] found id: "3a5532e08afdf23ab38d0c36d399d912a8e9b0a53300a49eb09d3a8a89eadc32"
	I0620 18:51:07.620760  480164 cri.go:89] found id: "8e0fbeb1faaf04145fb91474a4fefe89d03863858960210a2d6fe1c0470e27b2"
	I0620 18:51:07.620765  480164 cri.go:89] found id: ""
	I0620 18:51:07.620773  480164 logs.go:276] 2 containers: [3a5532e08afdf23ab38d0c36d399d912a8e9b0a53300a49eb09d3a8a89eadc32 8e0fbeb1faaf04145fb91474a4fefe89d03863858960210a2d6fe1c0470e27b2]
	I0620 18:51:07.620830  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:07.624484  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:07.628011  480164 logs.go:123] Gathering logs for dmesg ...
	I0620 18:51:07.628034  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0620 18:51:07.651740  480164 logs.go:123] Gathering logs for coredns [ca1e287c07f31d6bef3a1d780045b3a8e7710459444cfd0dc5aec40bdf817849] ...
	I0620 18:51:07.651856  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca1e287c07f31d6bef3a1d780045b3a8e7710459444cfd0dc5aec40bdf817849"
	I0620 18:51:07.704416  480164 logs.go:123] Gathering logs for kube-proxy [9fd9f632d01d3c34e1da4a8764e09e08aa6230252352139f058cd773a2d7b242] ...
	I0620 18:51:07.704453  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9fd9f632d01d3c34e1da4a8764e09e08aa6230252352139f058cd773a2d7b242"
	I0620 18:51:07.751696  480164 logs.go:123] Gathering logs for storage-provisioner [3a5532e08afdf23ab38d0c36d399d912a8e9b0a53300a49eb09d3a8a89eadc32] ...
	I0620 18:51:07.751728  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a5532e08afdf23ab38d0c36d399d912a8e9b0a53300a49eb09d3a8a89eadc32"
	I0620 18:51:07.797712  480164 logs.go:123] Gathering logs for storage-provisioner [8e0fbeb1faaf04145fb91474a4fefe89d03863858960210a2d6fe1c0470e27b2] ...
	I0620 18:51:07.797740  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e0fbeb1faaf04145fb91474a4fefe89d03863858960210a2d6fe1c0470e27b2"
	I0620 18:51:07.841615  480164 logs.go:123] Gathering logs for describe nodes ...
	I0620 18:51:07.841644  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0620 18:51:07.982965  480164 logs.go:123] Gathering logs for etcd [7f8a08978329e7c1a1ec083b484e7f92ea8e21d7f9739a5b8c6027567d5c9aa8] ...
	I0620 18:51:07.982995  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f8a08978329e7c1a1ec083b484e7f92ea8e21d7f9739a5b8c6027567d5c9aa8"
	I0620 18:51:08.041320  480164 logs.go:123] Gathering logs for kube-proxy [41587230eb876d5c499563d181931de04c95ecb7bdbe809d3e232e7d942a4eea] ...
	I0620 18:51:08.041352  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41587230eb876d5c499563d181931de04c95ecb7bdbe809d3e232e7d942a4eea"
	I0620 18:51:08.083178  480164 logs.go:123] Gathering logs for kindnet [3dae3d565289125ddcc1df2e658f60655cd8373cd01a1b71c4f6b7feeaf3b7ab] ...
	I0620 18:51:08.083207  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dae3d565289125ddcc1df2e658f60655cd8373cd01a1b71c4f6b7feeaf3b7ab"
	I0620 18:51:08.123198  480164 logs.go:123] Gathering logs for kubelet ...
	I0620 18:51:08.123228  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0620 18:51:08.168207  480164 logs.go:138] Found kubelet problem: Jun 20 18:46:59 no-preload-530880 kubelet[658]: W0620 18:46:59.268543     658 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-530880" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-530880' and this object
	W0620 18:51:08.168444  480164 logs.go:138] Found kubelet problem: Jun 20 18:46:59 no-preload-530880 kubelet[658]: E0620 18:46:59.268604     658 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-530880" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-530880' and this object
	I0620 18:51:08.198867  480164 logs.go:123] Gathering logs for etcd [b518668f2794393f0d52a9522913041022eefae75705e048fb7ac0b731564f95] ...
	I0620 18:51:08.198899  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b518668f2794393f0d52a9522913041022eefae75705e048fb7ac0b731564f95"
	I0620 18:51:08.250957  480164 logs.go:123] Gathering logs for kube-scheduler [7c9c3650337a9ad4c606fb7de7c270bc8a8cb013089e2b3a5447927a1ac37a66] ...
	I0620 18:51:08.250988  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c9c3650337a9ad4c606fb7de7c270bc8a8cb013089e2b3a5447927a1ac37a66"
	I0620 18:51:08.300365  480164 logs.go:123] Gathering logs for containerd ...
	I0620 18:51:08.300393  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0620 18:51:08.364121  480164 logs.go:123] Gathering logs for kube-controller-manager [299b91aff4c86ae8dc85313f36a5f9dc38efbd838ecfd8c78040a9d648044751] ...
	I0620 18:51:08.364159  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 299b91aff4c86ae8dc85313f36a5f9dc38efbd838ecfd8c78040a9d648044751"
	I0620 18:51:08.437657  480164 logs.go:123] Gathering logs for kube-controller-manager [5806db24329a0a8150b8f7815a0272af79e74d2760f6a31d23460e5328e7cb1b] ...
	I0620 18:51:08.437694  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5806db24329a0a8150b8f7815a0272af79e74d2760f6a31d23460e5328e7cb1b"
	I0620 18:51:09.379657  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:51:11.878526  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:51:08.497163  480164 logs.go:123] Gathering logs for kindnet [1a4909130a39c512a4d52fe89bf1c437e8a29f1d6cd2874c81bd4302ad52a1db] ...
	I0620 18:51:08.497199  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a4909130a39c512a4d52fe89bf1c437e8a29f1d6cd2874c81bd4302ad52a1db"
	I0620 18:51:08.548434  480164 logs.go:123] Gathering logs for kubernetes-dashboard [2ab864d8d9dac64f868645147111f7e876b80bea9d07a7cce1391ea1d8f0b803] ...
	I0620 18:51:08.548466  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab864d8d9dac64f868645147111f7e876b80bea9d07a7cce1391ea1d8f0b803"
	I0620 18:51:08.587538  480164 logs.go:123] Gathering logs for kube-apiserver [b4ff8b0c606a7e7063350d5957b42f0bc17dbffdfa353b2789e75a300d6e4d87] ...
	I0620 18:51:08.587572  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4ff8b0c606a7e7063350d5957b42f0bc17dbffdfa353b2789e75a300d6e4d87"
	I0620 18:51:08.641438  480164 logs.go:123] Gathering logs for kube-apiserver [32128302a97d35fac10fa7e033950a439ae38177c2719d53e457c20c630b1680] ...
	I0620 18:51:08.641472  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32128302a97d35fac10fa7e033950a439ae38177c2719d53e457c20c630b1680"
	I0620 18:51:08.701327  480164 logs.go:123] Gathering logs for coredns [f76e543336bf1cd1f5b883f0e280d4e501b27ffbda9d7f18459dd1225151e79d] ...
	I0620 18:51:08.701607  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f76e543336bf1cd1f5b883f0e280d4e501b27ffbda9d7f18459dd1225151e79d"
	I0620 18:51:08.747259  480164 logs.go:123] Gathering logs for kube-scheduler [cb011b11c097c5a2e23ed0fb6b90fc4948c0a623b247d45e79a18c80783a4708] ...
	I0620 18:51:08.747288  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb011b11c097c5a2e23ed0fb6b90fc4948c0a623b247d45e79a18c80783a4708"
	I0620 18:51:08.790082  480164 logs.go:123] Gathering logs for container status ...
	I0620 18:51:08.790112  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0620 18:51:08.852607  480164 out.go:304] Setting ErrFile to fd 2...
	I0620 18:51:08.852635  480164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0620 18:51:08.852691  480164 out.go:239] X Problems detected in kubelet:
	W0620 18:51:08.852703  480164 out.go:239]   Jun 20 18:46:59 no-preload-530880 kubelet[658]: W0620 18:46:59.268543     658 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-530880" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-530880' and this object
	W0620 18:51:08.852711  480164 out.go:239]   Jun 20 18:46:59 no-preload-530880 kubelet[658]: E0620 18:46:59.268604     658 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-530880" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-530880' and this object
	I0620 18:51:08.852721  480164 out.go:304] Setting ErrFile to fd 2...
	I0620 18:51:08.852727  480164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:51:14.378391  474995 pod_ready.go:102] pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace has status "Ready":"False"
	I0620 18:51:15.878416  474995 pod_ready.go:81] duration metric: took 4m0.006273918s for pod "metrics-server-9975d5f86-s95qr" in "kube-system" namespace to be "Ready" ...
	E0620 18:51:15.878443  474995 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0620 18:51:15.878452  474995 pod_ready.go:38] duration metric: took 5m28.813628413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0620 18:51:15.878466  474995 api_server.go:52] waiting for apiserver process to appear ...
	I0620 18:51:15.878494  474995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0620 18:51:15.878557  474995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0620 18:51:15.917287  474995 cri.go:89] found id: "ab9a5dc87140998b57eded669bdfe78785f54034fe3076c3679faee0c5e8b168"
	I0620 18:51:15.917312  474995 cri.go:89] found id: "cfdcf7bafc98db0ed8b117a7f81af17f4aa0ffe0d900af8fb469323b24ada207"
	I0620 18:51:15.917317  474995 cri.go:89] found id: ""
	I0620 18:51:15.917324  474995 logs.go:276] 2 containers: [ab9a5dc87140998b57eded669bdfe78785f54034fe3076c3679faee0c5e8b168 cfdcf7bafc98db0ed8b117a7f81af17f4aa0ffe0d900af8fb469323b24ada207]
	I0620 18:51:15.917380  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:15.921092  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:15.924286  474995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0620 18:51:15.924366  474995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0620 18:51:15.962596  474995 cri.go:89] found id: "d10f227a579ece3ba7f135984b5533ab1954b830dbfcff6951b3b52ca343d8c6"
	I0620 18:51:15.962617  474995 cri.go:89] found id: "78a0fe9b19212a5f753e82266fc6eda58535c93f7b45a9f7ae1027d529ba1b9f"
	I0620 18:51:15.962622  474995 cri.go:89] found id: ""
	I0620 18:51:15.962629  474995 logs.go:276] 2 containers: [d10f227a579ece3ba7f135984b5533ab1954b830dbfcff6951b3b52ca343d8c6 78a0fe9b19212a5f753e82266fc6eda58535c93f7b45a9f7ae1027d529ba1b9f]
	I0620 18:51:15.962688  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:15.966456  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:15.969955  474995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0620 18:51:15.970053  474995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0620 18:51:16.011278  474995 cri.go:89] found id: "3b0caff456f550681c8dc6f7ce0c9af69317ec12ac39d8a6e1fa2040ea4c8a50"
	I0620 18:51:16.011303  474995 cri.go:89] found id: "41ba1c27ef09f4269353d6500282257c6dc0ee5cc4906cc9d3f99fdacb2835c1"
	I0620 18:51:16.011308  474995 cri.go:89] found id: ""
	I0620 18:51:16.011316  474995 logs.go:276] 2 containers: [3b0caff456f550681c8dc6f7ce0c9af69317ec12ac39d8a6e1fa2040ea4c8a50 41ba1c27ef09f4269353d6500282257c6dc0ee5cc4906cc9d3f99fdacb2835c1]
	I0620 18:51:16.011380  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.020059  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.023712  474995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0620 18:51:16.023787  474995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0620 18:51:16.066563  474995 cri.go:89] found id: "216bb91d7edcfdb7e26e452ac2980c11fb6d169bc9ec8ff9d4125cefc89b4af8"
	I0620 18:51:16.066595  474995 cri.go:89] found id: "e895300ffd7e6f784d9567baacfb9b233853b31245990e391675289e62ad7659"
	I0620 18:51:16.066604  474995 cri.go:89] found id: ""
	I0620 18:51:16.066611  474995 logs.go:276] 2 containers: [216bb91d7edcfdb7e26e452ac2980c11fb6d169bc9ec8ff9d4125cefc89b4af8 e895300ffd7e6f784d9567baacfb9b233853b31245990e391675289e62ad7659]
	I0620 18:51:16.066671  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.070402  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.074053  474995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0620 18:51:16.074152  474995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0620 18:51:16.120292  474995 cri.go:89] found id: "fd105e227c64f742488149d7a7ef12f65582c9a17ebf40af0d31772fff23b850"
	I0620 18:51:16.120316  474995 cri.go:89] found id: "0084076706589b3159aa05b37a0ef12c102644140788b191724b0f7267fd4e3c"
	I0620 18:51:16.120322  474995 cri.go:89] found id: ""
	I0620 18:51:16.120329  474995 logs.go:276] 2 containers: [fd105e227c64f742488149d7a7ef12f65582c9a17ebf40af0d31772fff23b850 0084076706589b3159aa05b37a0ef12c102644140788b191724b0f7267fd4e3c]
	I0620 18:51:16.120414  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.124444  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.128126  474995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0620 18:51:16.128201  474995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0620 18:51:16.165780  474995 cri.go:89] found id: "ae831abf0d9b3aec7fd99a96e612f60ca28994573225480bd39d669115fd9323"
	I0620 18:51:16.165852  474995 cri.go:89] found id: "6ce85447f9024fd431a26a603f1a93ee2dd0f45da6feddb6ce6fee810b5fdd37"
	I0620 18:51:16.165887  474995 cri.go:89] found id: ""
	I0620 18:51:16.165913  474995 logs.go:276] 2 containers: [ae831abf0d9b3aec7fd99a96e612f60ca28994573225480bd39d669115fd9323 6ce85447f9024fd431a26a603f1a93ee2dd0f45da6feddb6ce6fee810b5fdd37]
	I0620 18:51:16.166004  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.169944  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.173672  474995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0620 18:51:16.173796  474995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0620 18:51:16.209860  474995 cri.go:89] found id: "e098983ce65fd9a05371c1e9237d75438fa40f748e0a679589f472f0a39a08e8"
	I0620 18:51:16.209887  474995 cri.go:89] found id: "13a0bb253788862ae22e6058b7f190fba11dd8db1d86a9df13f8b2aeb42fcecb"
	I0620 18:51:16.209904  474995 cri.go:89] found id: ""
	I0620 18:51:16.209913  474995 logs.go:276] 2 containers: [e098983ce65fd9a05371c1e9237d75438fa40f748e0a679589f472f0a39a08e8 13a0bb253788862ae22e6058b7f190fba11dd8db1d86a9df13f8b2aeb42fcecb]
	I0620 18:51:16.210006  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.214081  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.217615  474995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0620 18:51:16.217705  474995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0620 18:51:16.257685  474995 cri.go:89] found id: "9b335eeb8b67e4c38ce7a7e9a7bc9f20eb7a66185e54a8a71a8ecb76c3a9aa5f"
	I0620 18:51:16.257708  474995 cri.go:89] found id: ""
	I0620 18:51:16.257717  474995 logs.go:276] 1 containers: [9b335eeb8b67e4c38ce7a7e9a7bc9f20eb7a66185e54a8a71a8ecb76c3a9aa5f]
	I0620 18:51:16.257793  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.261284  474995 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0620 18:51:16.261408  474995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0620 18:51:16.314752  474995 cri.go:89] found id: "eed6e9e374d62971b45702bee247f112b7994fb1614a6637e94d738953631a39"
	I0620 18:51:16.314778  474995 cri.go:89] found id: "2361035748bd059e7b4f11b7715ec3ccdb81adbf542d803180ebb8bec389d445"
	I0620 18:51:16.314783  474995 cri.go:89] found id: ""
	I0620 18:51:16.314790  474995 logs.go:276] 2 containers: [eed6e9e374d62971b45702bee247f112b7994fb1614a6637e94d738953631a39 2361035748bd059e7b4f11b7715ec3ccdb81adbf542d803180ebb8bec389d445]
	I0620 18:51:16.314855  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.318775  474995 ssh_runner.go:195] Run: which crictl
	I0620 18:51:16.322870  474995 logs.go:123] Gathering logs for storage-provisioner [eed6e9e374d62971b45702bee247f112b7994fb1614a6637e94d738953631a39] ...
	I0620 18:51:16.322935  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eed6e9e374d62971b45702bee247f112b7994fb1614a6637e94d738953631a39"
	I0620 18:51:16.363609  474995 logs.go:123] Gathering logs for storage-provisioner [2361035748bd059e7b4f11b7715ec3ccdb81adbf542d803180ebb8bec389d445] ...
	I0620 18:51:16.363639  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2361035748bd059e7b4f11b7715ec3ccdb81adbf542d803180ebb8bec389d445"
	I0620 18:51:16.401395  474995 logs.go:123] Gathering logs for containerd ...
	I0620 18:51:16.401426  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0620 18:51:16.461054  474995 logs.go:123] Gathering logs for etcd [d10f227a579ece3ba7f135984b5533ab1954b830dbfcff6951b3b52ca343d8c6] ...
	I0620 18:51:16.461090  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d10f227a579ece3ba7f135984b5533ab1954b830dbfcff6951b3b52ca343d8c6"
	I0620 18:51:16.502734  474995 logs.go:123] Gathering logs for etcd [78a0fe9b19212a5f753e82266fc6eda58535c93f7b45a9f7ae1027d529ba1b9f] ...
	I0620 18:51:16.502766  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78a0fe9b19212a5f753e82266fc6eda58535c93f7b45a9f7ae1027d529ba1b9f"
	I0620 18:51:16.547328  474995 logs.go:123] Gathering logs for coredns [41ba1c27ef09f4269353d6500282257c6dc0ee5cc4906cc9d3f99fdacb2835c1] ...
	I0620 18:51:16.547358  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41ba1c27ef09f4269353d6500282257c6dc0ee5cc4906cc9d3f99fdacb2835c1"
	I0620 18:51:16.586152  474995 logs.go:123] Gathering logs for kube-controller-manager [6ce85447f9024fd431a26a603f1a93ee2dd0f45da6feddb6ce6fee810b5fdd37] ...
	I0620 18:51:16.586181  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ce85447f9024fd431a26a603f1a93ee2dd0f45da6feddb6ce6fee810b5fdd37"
	I0620 18:51:16.641778  474995 logs.go:123] Gathering logs for kubelet ...
	I0620 18:51:16.641814  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0620 18:51:16.701253  474995 logs.go:138] Found kubelet problem: Jun 20 18:45:47 old-k8s-version-337794 kubelet[665]: E0620 18:45:47.184128     665 reflector.go:138] object-"kube-system"/"metrics-server-token-5jskx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-5jskx" is forbidden: User "system:node:old-k8s-version-337794" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-337794' and this object
	W0620 18:51:16.701515  474995 logs.go:138] Found kubelet problem: Jun 20 18:45:47 old-k8s-version-337794 kubelet[665]: E0620 18:45:47.184290     665 reflector.go:138] object-"kube-system"/"kindnet-token-zqbvn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-zqbvn" is forbidden: User "system:node:old-k8s-version-337794" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-337794' and this object
	W0620 18:51:16.701729  474995 logs.go:138] Found kubelet problem: Jun 20 18:45:47 old-k8s-version-337794 kubelet[665]: E0620 18:45:47.184370     665 reflector.go:138] object-"default"/"default-token-lw785": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-lw785" is forbidden: User "system:node:old-k8s-version-337794" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-337794' and this object
	W0620 18:51:16.701946  474995 logs.go:138] Found kubelet problem: Jun 20 18:45:47 old-k8s-version-337794 kubelet[665]: E0620 18:45:47.184441     665 reflector.go:138] object-"kube-system"/"kube-proxy-token-6scpt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-6scpt" is forbidden: User "system:node:old-k8s-version-337794" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-337794' and this object
	W0620 18:51:16.702215  474995 logs.go:138] Found kubelet problem: Jun 20 18:45:47 old-k8s-version-337794 kubelet[665]: E0620 18:45:47.184579     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-dfq7m": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-dfq7m" is forbidden: User "system:node:old-k8s-version-337794" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-337794' and this object
	W0620 18:51:16.702446  474995 logs.go:138] Found kubelet problem: Jun 20 18:45:47 old-k8s-version-337794 kubelet[665]: E0620 18:45:47.184668     665 reflector.go:138] object-"kube-system"/"coredns-token-785nz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-785nz" is forbidden: User "system:node:old-k8s-version-337794" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-337794' and this object
	W0620 18:51:16.702665  474995 logs.go:138] Found kubelet problem: Jun 20 18:45:47 old-k8s-version-337794 kubelet[665]: E0620 18:45:47.184747     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-337794" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-337794' and this object
	W0620 18:51:16.702866  474995 logs.go:138] Found kubelet problem: Jun 20 18:45:47 old-k8s-version-337794 kubelet[665]: E0620 18:45:47.184827     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-337794" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-337794' and this object
	W0620 18:51:16.710426  474995 logs.go:138] Found kubelet problem: Jun 20 18:45:49 old-k8s-version-337794 kubelet[665]: E0620 18:45:49.248664     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0620 18:51:16.712632  474995 logs.go:138] Found kubelet problem: Jun 20 18:45:49 old-k8s-version-337794 kubelet[665]: E0620 18:45:49.863806     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.715551  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:03 old-k8s-version-337794 kubelet[665]: E0620 18:46:03.515344     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0620 18:51:16.715982  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:04 old-k8s-version-337794 kubelet[665]: E0620 18:46:04.480260     665 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-nkcgq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-nkcgq" is forbidden: User "system:node:old-k8s-version-337794" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-337794' and this object
	W0620 18:51:16.718088  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:11 old-k8s-version-337794 kubelet[665]: E0620 18:46:11.938046     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.718430  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:12 old-k8s-version-337794 kubelet[665]: E0620 18:46:12.936968     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.718764  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:13 old-k8s-version-337794 kubelet[665]: E0620 18:46:13.943072     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.719299  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:18 old-k8s-version-337794 kubelet[665]: E0620 18:46:18.503586     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.720222  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:25 old-k8s-version-337794 kubelet[665]: E0620 18:46:25.975971     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.722681  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:31 old-k8s-version-337794 kubelet[665]: E0620 18:46:31.512372     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0620 18:51:16.723015  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:33 old-k8s-version-337794 kubelet[665]: E0620 18:46:33.400781     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.723204  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:45 old-k8s-version-337794 kubelet[665]: E0620 18:46:45.503187     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.723792  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:49 old-k8s-version-337794 kubelet[665]: E0620 18:46:49.037647     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.724119  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:53 old-k8s-version-337794 kubelet[665]: E0620 18:46:53.401356     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.724304  474995 logs.go:138] Found kubelet problem: Jun 20 18:46:57 old-k8s-version-337794 kubelet[665]: E0620 18:46:57.503282     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.724630  474995 logs.go:138] Found kubelet problem: Jun 20 18:47:07 old-k8s-version-337794 kubelet[665]: E0620 18:47:07.502771     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.727084  474995 logs.go:138] Found kubelet problem: Jun 20 18:47:12 old-k8s-version-337794 kubelet[665]: E0620 18:47:12.515321     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0620 18:51:16.727450  474995 logs.go:138] Found kubelet problem: Jun 20 18:47:18 old-k8s-version-337794 kubelet[665]: E0620 18:47:18.503470     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.727640  474995 logs.go:138] Found kubelet problem: Jun 20 18:47:27 old-k8s-version-337794 kubelet[665]: E0620 18:47:27.503319     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.728229  474995 logs.go:138] Found kubelet problem: Jun 20 18:47:33 old-k8s-version-337794 kubelet[665]: E0620 18:47:33.205812     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.728575  474995 logs.go:138] Found kubelet problem: Jun 20 18:47:34 old-k8s-version-337794 kubelet[665]: E0620 18:47:34.209709     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.728767  474995 logs.go:138] Found kubelet problem: Jun 20 18:47:41 old-k8s-version-337794 kubelet[665]: E0620 18:47:41.503093     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.729094  474995 logs.go:138] Found kubelet problem: Jun 20 18:47:47 old-k8s-version-337794 kubelet[665]: E0620 18:47:47.502809     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.729277  474995 logs.go:138] Found kubelet problem: Jun 20 18:47:56 old-k8s-version-337794 kubelet[665]: E0620 18:47:56.504347     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.729606  474995 logs.go:138] Found kubelet problem: Jun 20 18:48:02 old-k8s-version-337794 kubelet[665]: E0620 18:48:02.503257     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.729797  474995 logs.go:138] Found kubelet problem: Jun 20 18:48:09 old-k8s-version-337794 kubelet[665]: E0620 18:48:09.503212     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.730123  474995 logs.go:138] Found kubelet problem: Jun 20 18:48:14 old-k8s-version-337794 kubelet[665]: E0620 18:48:14.504569     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.730308  474995 logs.go:138] Found kubelet problem: Jun 20 18:48:24 old-k8s-version-337794 kubelet[665]: E0620 18:48:24.506143     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.730637  474995 logs.go:138] Found kubelet problem: Jun 20 18:48:26 old-k8s-version-337794 kubelet[665]: E0620 18:48:26.503482     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.730967  474995 logs.go:138] Found kubelet problem: Jun 20 18:48:37 old-k8s-version-337794 kubelet[665]: E0620 18:48:37.502884     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.733454  474995 logs.go:138] Found kubelet problem: Jun 20 18:48:38 old-k8s-version-337794 kubelet[665]: E0620 18:48:38.511247     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0620 18:51:16.733791  474995 logs.go:138] Found kubelet problem: Jun 20 18:48:48 old-k8s-version-337794 kubelet[665]: E0620 18:48:48.504010     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.733979  474995 logs.go:138] Found kubelet problem: Jun 20 18:48:50 old-k8s-version-337794 kubelet[665]: E0620 18:48:50.506025     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.734596  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:01 old-k8s-version-337794 kubelet[665]: E0620 18:49:01.386876     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.734926  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:03 old-k8s-version-337794 kubelet[665]: E0620 18:49:03.401266     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.735125  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:03 old-k8s-version-337794 kubelet[665]: E0620 18:49:03.503351     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.735310  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:16 old-k8s-version-337794 kubelet[665]: E0620 18:49:16.506224     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.735639  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:17 old-k8s-version-337794 kubelet[665]: E0620 18:49:17.502948     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.735965  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:28 old-k8s-version-337794 kubelet[665]: E0620 18:49:28.505091     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.736154  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:29 old-k8s-version-337794 kubelet[665]: E0620 18:49:29.503220     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.736482  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:39 old-k8s-version-337794 kubelet[665]: E0620 18:49:39.503259     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.736666  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:44 old-k8s-version-337794 kubelet[665]: E0620 18:49:44.503505     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.736992  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:52 old-k8s-version-337794 kubelet[665]: E0620 18:49:52.504696     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.737176  474995 logs.go:138] Found kubelet problem: Jun 20 18:49:57 old-k8s-version-337794 kubelet[665]: E0620 18:49:57.503168     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.737508  474995 logs.go:138] Found kubelet problem: Jun 20 18:50:04 old-k8s-version-337794 kubelet[665]: E0620 18:50:04.503416     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.737691  474995 logs.go:138] Found kubelet problem: Jun 20 18:50:08 old-k8s-version-337794 kubelet[665]: E0620 18:50:08.504863     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.738018  474995 logs.go:138] Found kubelet problem: Jun 20 18:50:16 old-k8s-version-337794 kubelet[665]: E0620 18:50:16.503299     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.738204  474995 logs.go:138] Found kubelet problem: Jun 20 18:50:19 old-k8s-version-337794 kubelet[665]: E0620 18:50:19.503499     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.738532  474995 logs.go:138] Found kubelet problem: Jun 20 18:50:30 old-k8s-version-337794 kubelet[665]: E0620 18:50:30.503459     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.738717  474995 logs.go:138] Found kubelet problem: Jun 20 18:50:34 old-k8s-version-337794 kubelet[665]: E0620 18:50:34.504128     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.739053  474995 logs.go:138] Found kubelet problem: Jun 20 18:50:44 old-k8s-version-337794 kubelet[665]: E0620 18:50:44.503802     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.739240  474995 logs.go:138] Found kubelet problem: Jun 20 18:50:48 old-k8s-version-337794 kubelet[665]: E0620 18:50:48.503323     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.739566  474995 logs.go:138] Found kubelet problem: Jun 20 18:50:55 old-k8s-version-337794 kubelet[665]: E0620 18:50:55.503245     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:16.739752  474995 logs.go:138] Found kubelet problem: Jun 20 18:51:03 old-k8s-version-337794 kubelet[665]: E0620 18:51:03.503156     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:16.740079  474995 logs.go:138] Found kubelet problem: Jun 20 18:51:08 old-k8s-version-337794 kubelet[665]: E0620 18:51:08.506428     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	I0620 18:51:16.740090  474995 logs.go:123] Gathering logs for dmesg ...
	I0620 18:51:16.740104  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0620 18:51:16.769562  474995 logs.go:123] Gathering logs for kube-apiserver [cfdcf7bafc98db0ed8b117a7f81af17f4aa0ffe0d900af8fb469323b24ada207] ...
	I0620 18:51:16.769593  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cfdcf7bafc98db0ed8b117a7f81af17f4aa0ffe0d900af8fb469323b24ada207"
	I0620 18:51:16.863581  474995 logs.go:123] Gathering logs for coredns [3b0caff456f550681c8dc6f7ce0c9af69317ec12ac39d8a6e1fa2040ea4c8a50] ...
	I0620 18:51:16.863616  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b0caff456f550681c8dc6f7ce0c9af69317ec12ac39d8a6e1fa2040ea4c8a50"
	I0620 18:51:16.908011  474995 logs.go:123] Gathering logs for container status ...
	I0620 18:51:16.908041  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0620 18:51:16.960275  474995 logs.go:123] Gathering logs for describe nodes ...
	I0620 18:51:16.960331  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0620 18:51:17.133154  474995 logs.go:123] Gathering logs for kube-apiserver [ab9a5dc87140998b57eded669bdfe78785f54034fe3076c3679faee0c5e8b168] ...
	I0620 18:51:17.133189  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab9a5dc87140998b57eded669bdfe78785f54034fe3076c3679faee0c5e8b168"
	I0620 18:51:17.196301  474995 logs.go:123] Gathering logs for kube-scheduler [e895300ffd7e6f784d9567baacfb9b233853b31245990e391675289e62ad7659] ...
	I0620 18:51:17.196379  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e895300ffd7e6f784d9567baacfb9b233853b31245990e391675289e62ad7659"
	I0620 18:51:17.247573  474995 logs.go:123] Gathering logs for kindnet [13a0bb253788862ae22e6058b7f190fba11dd8db1d86a9df13f8b2aeb42fcecb] ...
	I0620 18:51:17.247606  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13a0bb253788862ae22e6058b7f190fba11dd8db1d86a9df13f8b2aeb42fcecb"
	I0620 18:51:17.288206  474995 logs.go:123] Gathering logs for kindnet [e098983ce65fd9a05371c1e9237d75438fa40f748e0a679589f472f0a39a08e8] ...
	I0620 18:51:17.288236  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e098983ce65fd9a05371c1e9237d75438fa40f748e0a679589f472f0a39a08e8"
	I0620 18:51:17.334130  474995 logs.go:123] Gathering logs for kubernetes-dashboard [9b335eeb8b67e4c38ce7a7e9a7bc9f20eb7a66185e54a8a71a8ecb76c3a9aa5f] ...
	I0620 18:51:17.334171  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b335eeb8b67e4c38ce7a7e9a7bc9f20eb7a66185e54a8a71a8ecb76c3a9aa5f"
	I0620 18:51:17.377828  474995 logs.go:123] Gathering logs for kube-scheduler [216bb91d7edcfdb7e26e452ac2980c11fb6d169bc9ec8ff9d4125cefc89b4af8] ...
	I0620 18:51:17.377858  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 216bb91d7edcfdb7e26e452ac2980c11fb6d169bc9ec8ff9d4125cefc89b4af8"
	I0620 18:51:17.422741  474995 logs.go:123] Gathering logs for kube-proxy [fd105e227c64f742488149d7a7ef12f65582c9a17ebf40af0d31772fff23b850] ...
	I0620 18:51:17.422816  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd105e227c64f742488149d7a7ef12f65582c9a17ebf40af0d31772fff23b850"
	I0620 18:51:17.464495  474995 logs.go:123] Gathering logs for kube-proxy [0084076706589b3159aa05b37a0ef12c102644140788b191724b0f7267fd4e3c] ...
	I0620 18:51:17.464523  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0084076706589b3159aa05b37a0ef12c102644140788b191724b0f7267fd4e3c"
	I0620 18:51:17.502232  474995 logs.go:123] Gathering logs for kube-controller-manager [ae831abf0d9b3aec7fd99a96e612f60ca28994573225480bd39d669115fd9323] ...
	I0620 18:51:17.502300  474995 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae831abf0d9b3aec7fd99a96e612f60ca28994573225480bd39d669115fd9323"
	I0620 18:51:17.572024  474995 out.go:304] Setting ErrFile to fd 2...
	I0620 18:51:17.572055  474995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0620 18:51:17.572109  474995 out.go:239] X Problems detected in kubelet:
	W0620 18:51:17.572123  474995 out.go:239]   Jun 20 18:50:44 old-k8s-version-337794 kubelet[665]: E0620 18:50:44.503802     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:17.572131  474995 out.go:239]   Jun 20 18:50:48 old-k8s-version-337794 kubelet[665]: E0620 18:50:48.503323     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:17.572147  474995 out.go:239]   Jun 20 18:50:55 old-k8s-version-337794 kubelet[665]: E0620 18:50:55.503245     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	W0620 18:51:17.572154  474995 out.go:239]   Jun 20 18:51:03 old-k8s-version-337794 kubelet[665]: E0620 18:51:03.503156     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0620 18:51:17.572166  474995 out.go:239]   Jun 20 18:51:08 old-k8s-version-337794 kubelet[665]: E0620 18:51:08.506428     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	I0620 18:51:17.572172  474995 out.go:304] Setting ErrFile to fd 2...
	I0620 18:51:17.572181  474995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:51:18.854013  480164 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0620 18:51:18.861790  480164 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0620 18:51:18.862908  480164 api_server.go:141] control plane version: v1.30.2
	I0620 18:51:18.862959  480164 api_server.go:131] duration metric: took 11.646991045s to wait for apiserver health ...
	I0620 18:51:18.862969  480164 system_pods.go:43] waiting for kube-system pods to appear ...
	I0620 18:51:18.862994  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0620 18:51:18.863128  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0620 18:51:18.905252  480164 cri.go:89] found id: "b4ff8b0c606a7e7063350d5957b42f0bc17dbffdfa353b2789e75a300d6e4d87"
	I0620 18:51:18.905275  480164 cri.go:89] found id: "32128302a97d35fac10fa7e033950a439ae38177c2719d53e457c20c630b1680"
	I0620 18:51:18.905281  480164 cri.go:89] found id: ""
	I0620 18:51:18.905289  480164 logs.go:276] 2 containers: [b4ff8b0c606a7e7063350d5957b42f0bc17dbffdfa353b2789e75a300d6e4d87 32128302a97d35fac10fa7e033950a439ae38177c2719d53e457c20c630b1680]
	I0620 18:51:18.905343  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:18.909587  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:18.913476  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0620 18:51:18.913592  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0620 18:51:18.952864  480164 cri.go:89] found id: "7f8a08978329e7c1a1ec083b484e7f92ea8e21d7f9739a5b8c6027567d5c9aa8"
	I0620 18:51:18.952889  480164 cri.go:89] found id: "b518668f2794393f0d52a9522913041022eefae75705e048fb7ac0b731564f95"
	I0620 18:51:18.952894  480164 cri.go:89] found id: ""
	I0620 18:51:18.952901  480164 logs.go:276] 2 containers: [7f8a08978329e7c1a1ec083b484e7f92ea8e21d7f9739a5b8c6027567d5c9aa8 b518668f2794393f0d52a9522913041022eefae75705e048fb7ac0b731564f95]
	I0620 18:51:18.952966  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:18.956629  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:18.960147  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0620 18:51:18.960224  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0620 18:51:18.997449  480164 cri.go:89] found id: "ca1e287c07f31d6bef3a1d780045b3a8e7710459444cfd0dc5aec40bdf817849"
	I0620 18:51:18.997472  480164 cri.go:89] found id: "f76e543336bf1cd1f5b883f0e280d4e501b27ffbda9d7f18459dd1225151e79d"
	I0620 18:51:18.997477  480164 cri.go:89] found id: ""
	I0620 18:51:18.997485  480164 logs.go:276] 2 containers: [ca1e287c07f31d6bef3a1d780045b3a8e7710459444cfd0dc5aec40bdf817849 f76e543336bf1cd1f5b883f0e280d4e501b27ffbda9d7f18459dd1225151e79d]
	I0620 18:51:18.997542  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:19.002711  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:19.007321  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0620 18:51:19.007421  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0620 18:51:19.049402  480164 cri.go:89] found id: "7c9c3650337a9ad4c606fb7de7c270bc8a8cb013089e2b3a5447927a1ac37a66"
	I0620 18:51:19.049425  480164 cri.go:89] found id: "cb011b11c097c5a2e23ed0fb6b90fc4948c0a623b247d45e79a18c80783a4708"
	I0620 18:51:19.049430  480164 cri.go:89] found id: ""
	I0620 18:51:19.049437  480164 logs.go:276] 2 containers: [7c9c3650337a9ad4c606fb7de7c270bc8a8cb013089e2b3a5447927a1ac37a66 cb011b11c097c5a2e23ed0fb6b90fc4948c0a623b247d45e79a18c80783a4708]
	I0620 18:51:19.049495  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:19.053789  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:19.058363  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0620 18:51:19.058438  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0620 18:51:19.104203  480164 cri.go:89] found id: "9fd9f632d01d3c34e1da4a8764e09e08aa6230252352139f058cd773a2d7b242"
	I0620 18:51:19.104227  480164 cri.go:89] found id: "41587230eb876d5c499563d181931de04c95ecb7bdbe809d3e232e7d942a4eea"
	I0620 18:51:19.104232  480164 cri.go:89] found id: ""
	I0620 18:51:19.104240  480164 logs.go:276] 2 containers: [9fd9f632d01d3c34e1da4a8764e09e08aa6230252352139f058cd773a2d7b242 41587230eb876d5c499563d181931de04c95ecb7bdbe809d3e232e7d942a4eea]
	I0620 18:51:19.104297  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:19.108093  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:19.111952  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0620 18:51:19.112039  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0620 18:51:19.151794  480164 cri.go:89] found id: "299b91aff4c86ae8dc85313f36a5f9dc38efbd838ecfd8c78040a9d648044751"
	I0620 18:51:19.151867  480164 cri.go:89] found id: "5806db24329a0a8150b8f7815a0272af79e74d2760f6a31d23460e5328e7cb1b"
	I0620 18:51:19.151886  480164 cri.go:89] found id: ""
	I0620 18:51:19.151907  480164 logs.go:276] 2 containers: [299b91aff4c86ae8dc85313f36a5f9dc38efbd838ecfd8c78040a9d648044751 5806db24329a0a8150b8f7815a0272af79e74d2760f6a31d23460e5328e7cb1b]
	I0620 18:51:19.151993  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:19.156378  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:19.160132  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0620 18:51:19.160250  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0620 18:51:19.204025  480164 cri.go:89] found id: "1a4909130a39c512a4d52fe89bf1c437e8a29f1d6cd2874c81bd4302ad52a1db"
	I0620 18:51:19.204056  480164 cri.go:89] found id: "3dae3d565289125ddcc1df2e658f60655cd8373cd01a1b71c4f6b7feeaf3b7ab"
	I0620 18:51:19.204061  480164 cri.go:89] found id: ""
	I0620 18:51:19.204069  480164 logs.go:276] 2 containers: [1a4909130a39c512a4d52fe89bf1c437e8a29f1d6cd2874c81bd4302ad52a1db 3dae3d565289125ddcc1df2e658f60655cd8373cd01a1b71c4f6b7feeaf3b7ab]
	I0620 18:51:19.204146  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:19.208165  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:19.212055  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0620 18:51:19.212176  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0620 18:51:19.251926  480164 cri.go:89] found id: "2ab864d8d9dac64f868645147111f7e876b80bea9d07a7cce1391ea1d8f0b803"
	I0620 18:51:19.251951  480164 cri.go:89] found id: ""
	I0620 18:51:19.251959  480164 logs.go:276] 1 containers: [2ab864d8d9dac64f868645147111f7e876b80bea9d07a7cce1391ea1d8f0b803]
	I0620 18:51:19.252040  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:19.255557  480164 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0620 18:51:19.255629  480164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0620 18:51:19.294420  480164 cri.go:89] found id: "3a5532e08afdf23ab38d0c36d399d912a8e9b0a53300a49eb09d3a8a89eadc32"
	I0620 18:51:19.294442  480164 cri.go:89] found id: "8e0fbeb1faaf04145fb91474a4fefe89d03863858960210a2d6fe1c0470e27b2"
	I0620 18:51:19.294447  480164 cri.go:89] found id: ""
	I0620 18:51:19.294454  480164 logs.go:276] 2 containers: [3a5532e08afdf23ab38d0c36d399d912a8e9b0a53300a49eb09d3a8a89eadc32 8e0fbeb1faaf04145fb91474a4fefe89d03863858960210a2d6fe1c0470e27b2]
	I0620 18:51:19.294513  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:19.300064  480164 ssh_runner.go:195] Run: which crictl
	I0620 18:51:19.304577  480164 logs.go:123] Gathering logs for describe nodes ...
	I0620 18:51:19.304605  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0620 18:51:19.436766  480164 logs.go:123] Gathering logs for kube-controller-manager [299b91aff4c86ae8dc85313f36a5f9dc38efbd838ecfd8c78040a9d648044751] ...
	I0620 18:51:19.436799  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 299b91aff4c86ae8dc85313f36a5f9dc38efbd838ecfd8c78040a9d648044751"
	I0620 18:51:19.520151  480164 logs.go:123] Gathering logs for kubernetes-dashboard [2ab864d8d9dac64f868645147111f7e876b80bea9d07a7cce1391ea1d8f0b803] ...
	I0620 18:51:19.520187  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ab864d8d9dac64f868645147111f7e876b80bea9d07a7cce1391ea1d8f0b803"
	I0620 18:51:19.567588  480164 logs.go:123] Gathering logs for container status ...
	I0620 18:51:19.567616  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0620 18:51:19.614313  480164 logs.go:123] Gathering logs for kube-proxy [41587230eb876d5c499563d181931de04c95ecb7bdbe809d3e232e7d942a4eea] ...
	I0620 18:51:19.614344  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41587230eb876d5c499563d181931de04c95ecb7bdbe809d3e232e7d942a4eea"
	I0620 18:51:19.660535  480164 logs.go:123] Gathering logs for kube-controller-manager [5806db24329a0a8150b8f7815a0272af79e74d2760f6a31d23460e5328e7cb1b] ...
	I0620 18:51:19.660565  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5806db24329a0a8150b8f7815a0272af79e74d2760f6a31d23460e5328e7cb1b"
	I0620 18:51:19.734824  480164 logs.go:123] Gathering logs for kindnet [3dae3d565289125ddcc1df2e658f60655cd8373cd01a1b71c4f6b7feeaf3b7ab] ...
	I0620 18:51:19.734865  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dae3d565289125ddcc1df2e658f60655cd8373cd01a1b71c4f6b7feeaf3b7ab"
	I0620 18:51:19.781675  480164 logs.go:123] Gathering logs for kube-apiserver [b4ff8b0c606a7e7063350d5957b42f0bc17dbffdfa353b2789e75a300d6e4d87] ...
	I0620 18:51:19.781705  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4ff8b0c606a7e7063350d5957b42f0bc17dbffdfa353b2789e75a300d6e4d87"
	I0620 18:51:19.849931  480164 logs.go:123] Gathering logs for kube-apiserver [32128302a97d35fac10fa7e033950a439ae38177c2719d53e457c20c630b1680] ...
	I0620 18:51:19.849964  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32128302a97d35fac10fa7e033950a439ae38177c2719d53e457c20c630b1680"
	I0620 18:51:19.913677  480164 logs.go:123] Gathering logs for etcd [b518668f2794393f0d52a9522913041022eefae75705e048fb7ac0b731564f95] ...
	I0620 18:51:19.913758  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b518668f2794393f0d52a9522913041022eefae75705e048fb7ac0b731564f95"
	I0620 18:51:19.963533  480164 logs.go:123] Gathering logs for coredns [f76e543336bf1cd1f5b883f0e280d4e501b27ffbda9d7f18459dd1225151e79d] ...
	I0620 18:51:19.963602  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f76e543336bf1cd1f5b883f0e280d4e501b27ffbda9d7f18459dd1225151e79d"
	I0620 18:51:20.016776  480164 logs.go:123] Gathering logs for kube-proxy [9fd9f632d01d3c34e1da4a8764e09e08aa6230252352139f058cd773a2d7b242] ...
	I0620 18:51:20.016803  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9fd9f632d01d3c34e1da4a8764e09e08aa6230252352139f058cd773a2d7b242"
	I0620 18:51:20.066359  480164 logs.go:123] Gathering logs for storage-provisioner [8e0fbeb1faaf04145fb91474a4fefe89d03863858960210a2d6fe1c0470e27b2] ...
	I0620 18:51:20.066390  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e0fbeb1faaf04145fb91474a4fefe89d03863858960210a2d6fe1c0470e27b2"
	I0620 18:51:20.108575  480164 logs.go:123] Gathering logs for containerd ...
	I0620 18:51:20.108604  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0620 18:51:20.171102  480164 logs.go:123] Gathering logs for etcd [7f8a08978329e7c1a1ec083b484e7f92ea8e21d7f9739a5b8c6027567d5c9aa8] ...
	I0620 18:51:20.171139  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f8a08978329e7c1a1ec083b484e7f92ea8e21d7f9739a5b8c6027567d5c9aa8"
	I0620 18:51:20.222078  480164 logs.go:123] Gathering logs for coredns [ca1e287c07f31d6bef3a1d780045b3a8e7710459444cfd0dc5aec40bdf817849] ...
	I0620 18:51:20.222111  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca1e287c07f31d6bef3a1d780045b3a8e7710459444cfd0dc5aec40bdf817849"
	I0620 18:51:20.262479  480164 logs.go:123] Gathering logs for kube-scheduler [cb011b11c097c5a2e23ed0fb6b90fc4948c0a623b247d45e79a18c80783a4708] ...
	I0620 18:51:20.262506  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb011b11c097c5a2e23ed0fb6b90fc4948c0a623b247d45e79a18c80783a4708"
	I0620 18:51:20.305750  480164 logs.go:123] Gathering logs for kindnet [1a4909130a39c512a4d52fe89bf1c437e8a29f1d6cd2874c81bd4302ad52a1db] ...
	I0620 18:51:20.305777  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a4909130a39c512a4d52fe89bf1c437e8a29f1d6cd2874c81bd4302ad52a1db"
	I0620 18:51:20.344767  480164 logs.go:123] Gathering logs for storage-provisioner [3a5532e08afdf23ab38d0c36d399d912a8e9b0a53300a49eb09d3a8a89eadc32] ...
	I0620 18:51:20.344792  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a5532e08afdf23ab38d0c36d399d912a8e9b0a53300a49eb09d3a8a89eadc32"
	I0620 18:51:20.388247  480164 logs.go:123] Gathering logs for kubelet ...
	I0620 18:51:20.388275  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0620 18:51:20.432523  480164 logs.go:138] Found kubelet problem: Jun 20 18:46:59 no-preload-530880 kubelet[658]: W0620 18:46:59.268543     658 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-530880" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-530880' and this object
	W0620 18:51:20.432783  480164 logs.go:138] Found kubelet problem: Jun 20 18:46:59 no-preload-530880 kubelet[658]: E0620 18:46:59.268604     658 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-530880" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-530880' and this object
	I0620 18:51:20.464406  480164 logs.go:123] Gathering logs for dmesg ...
	I0620 18:51:20.464444  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0620 18:51:20.483366  480164 logs.go:123] Gathering logs for kube-scheduler [7c9c3650337a9ad4c606fb7de7c270bc8a8cb013089e2b3a5447927a1ac37a66] ...
	I0620 18:51:20.483395  480164 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c9c3650337a9ad4c606fb7de7c270bc8a8cb013089e2b3a5447927a1ac37a66"
	I0620 18:51:20.538054  480164 out.go:304] Setting ErrFile to fd 2...
	I0620 18:51:20.538082  480164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0620 18:51:20.538158  480164 out.go:239] X Problems detected in kubelet:
	W0620 18:51:20.538170  480164 out.go:239]   Jun 20 18:46:59 no-preload-530880 kubelet[658]: W0620 18:46:59.268543     658 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-530880" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-530880' and this object
	W0620 18:51:20.538196  480164 out.go:239]   Jun 20 18:46:59 no-preload-530880 kubelet[658]: E0620 18:46:59.268604     658 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-530880" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-530880' and this object
	I0620 18:51:20.538211  480164 out.go:304] Setting ErrFile to fd 2...
	I0620 18:51:20.538230  480164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:51:27.573431  474995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0620 18:51:27.584997  474995 api_server.go:72] duration metric: took 6m1.155713275s to wait for apiserver process to appear ...
	I0620 18:51:27.585025  474995 api_server.go:88] waiting for apiserver healthz status ...
	I0620 18:51:27.587901  474995 out.go:177] 
	W0620 18:51:27.590132  474995 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0620 18:51:27.590150  474995 out.go:239] * 
	W0620 18:51:27.591168  474995 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0620 18:51:27.593949  474995 out.go:177] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	de5c94d9a8499       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   8034c0f16fc73       dashboard-metrics-scraper-8d5bb5db8-kwshm
	9b335eeb8b67e       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   6e14806468b4f       kubernetes-dashboard-cd95d586-9j6rr
	eed6e9e374d62       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   b938130bbf205       storage-provisioner
	3b0caff456f55       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   6378a2b4d34dc       coredns-74ff55c5b-2mwgh
	267a6a02854d6       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   c95d88b729d4a       busybox
	e098983ce65fd       89d73d416b992       5 minutes ago       Running             kindnet-cni                 1                   9f9966d7b9364       kindnet-gkbqq
	fd105e227c64f       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   31893a86e9df6       kube-proxy-h4r8m
	ab9a5dc871409       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   138e9be90db79       kube-apiserver-old-k8s-version-337794
	ae831abf0d9b3       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   087629d82f14b       kube-controller-manager-old-k8s-version-337794
	216bb91d7edcf       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   5f5867a3a9371       kube-scheduler-old-k8s-version-337794
	d10f227a579ec       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   7e9b1074b9fa5       etcd-old-k8s-version-337794
	229f21cd037f0       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   ee2b135781bcb       busybox
	2361035748bd0       ba04bb24b9575       7 minutes ago       Exited              storage-provisioner         1                   f06d6e1cc8be2       storage-provisioner
	41ba1c27ef09f       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   1edcdb1e91ef5       coredns-74ff55c5b-2mwgh
	13a0bb2537888       89d73d416b992       8 minutes ago       Exited              kindnet-cni                 0                   ddcf25d2ba16b       kindnet-gkbqq
	0084076706589       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   f23c4fb3ddf66       kube-proxy-h4r8m
	e895300ffd7e6       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   a133f7724b055       kube-scheduler-old-k8s-version-337794
	78a0fe9b19212       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   30e85537c352b       etcd-old-k8s-version-337794
	cfdcf7bafc98d       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   f9c4d472a3d3e       kube-apiserver-old-k8s-version-337794
	6ce85447f9024       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   6074652e21301       kube-controller-manager-old-k8s-version-337794
	
	
	==> containerd <==
	Jun 20 18:47:32 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:47:32.528462583Z" level=info msg="CreateContainer within sandbox \"8034c0f16fc737b1b093f946dad82b0a4ebe7044c0ca885f31d670d27d4518a5\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,} returns container id \"5dd93664e850ff339a43265f009cbba0749f04161b991c22807e19e45cada55f\""
	Jun 20 18:47:32 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:47:32.529393541Z" level=info msg="StartContainer for \"5dd93664e850ff339a43265f009cbba0749f04161b991c22807e19e45cada55f\""
	Jun 20 18:47:32 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:47:32.601955886Z" level=info msg="StartContainer for \"5dd93664e850ff339a43265f009cbba0749f04161b991c22807e19e45cada55f\" returns successfully"
	Jun 20 18:47:32 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:47:32.633326827Z" level=info msg="shim disconnected" id=5dd93664e850ff339a43265f009cbba0749f04161b991c22807e19e45cada55f
	Jun 20 18:47:32 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:47:32.633389547Z" level=warning msg="cleaning up after shim disconnected" id=5dd93664e850ff339a43265f009cbba0749f04161b991c22807e19e45cada55f namespace=k8s.io
	Jun 20 18:47:32 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:47:32.633404357Z" level=info msg="cleaning up dead shim"
	Jun 20 18:47:32 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:47:32.641179702Z" level=warning msg="cleanup warnings time=\"2024-06-20T18:47:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2879 runtime=io.containerd.runc.v2\n"
	Jun 20 18:47:33 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:47:33.212921134Z" level=info msg="RemoveContainer for \"617df317f643773dc452ca1c757eed657fb39be04050ea848a8f289481502df4\""
	Jun 20 18:47:33 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:47:33.220421239Z" level=info msg="RemoveContainer for \"617df317f643773dc452ca1c757eed657fb39be04050ea848a8f289481502df4\" returns successfully"
	Jun 20 18:48:38 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:48:38.503898489Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 20 18:48:38 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:48:38.508340937Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Jun 20 18:48:38 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:48:38.510082426Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Jun 20 18:49:00 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:49:00.505659953Z" level=info msg="CreateContainer within sandbox \"8034c0f16fc737b1b093f946dad82b0a4ebe7044c0ca885f31d670d27d4518a5\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,}"
	Jun 20 18:49:00 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:49:00.529410410Z" level=info msg="CreateContainer within sandbox \"8034c0f16fc737b1b093f946dad82b0a4ebe7044c0ca885f31d670d27d4518a5\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,} returns container id \"de5c94d9a849971f51404389ff5596adb34621d7121c4770dff3a957a9a3898f\""
	Jun 20 18:49:00 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:49:00.530000233Z" level=info msg="StartContainer for \"de5c94d9a849971f51404389ff5596adb34621d7121c4770dff3a957a9a3898f\""
	Jun 20 18:49:00 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:49:00.597972335Z" level=info msg="StartContainer for \"de5c94d9a849971f51404389ff5596adb34621d7121c4770dff3a957a9a3898f\" returns successfully"
	Jun 20 18:49:00 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:49:00.625041479Z" level=info msg="shim disconnected" id=de5c94d9a849971f51404389ff5596adb34621d7121c4770dff3a957a9a3898f
	Jun 20 18:49:00 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:49:00.625102279Z" level=warning msg="cleaning up after shim disconnected" id=de5c94d9a849971f51404389ff5596adb34621d7121c4770dff3a957a9a3898f namespace=k8s.io
	Jun 20 18:49:00 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:49:00.625113200Z" level=info msg="cleaning up dead shim"
	Jun 20 18:49:00 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:49:00.634441229Z" level=warning msg="cleanup warnings time=\"2024-06-20T18:49:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3132 runtime=io.containerd.runc.v2\n"
	Jun 20 18:49:01 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:49:01.388597602Z" level=info msg="RemoveContainer for \"5dd93664e850ff339a43265f009cbba0749f04161b991c22807e19e45cada55f\""
	Jun 20 18:49:01 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:49:01.393719251Z" level=info msg="RemoveContainer for \"5dd93664e850ff339a43265f009cbba0749f04161b991c22807e19e45cada55f\" returns successfully"
	Jun 20 18:51:28 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:51:28.504075313Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 20 18:51:28 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:51:28.510680649Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Jun 20 18:51:28 old-k8s-version-337794 containerd[570]: time="2024-06-20T18:51:28.512493482Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	
	
	==> coredns [3b0caff456f550681c8dc6f7ce0c9af69317ec12ac39d8a6e1fa2040ea4c8a50] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:57989 - 24400 "HINFO IN 794100208768980362.5126620339222536844. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.050481578s
	
	
	==> coredns [41ba1c27ef09f4269353d6500282257c6dc0ee5cc4906cc9d3f99fdacb2835c1] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:36977 - 47511 "HINFO IN 3496524688972058563.6213800918317370044. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030240903s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-337794
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-337794
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5bfa5828b76fe92a3c5f89a54d8c76f6b5f3f8b
	                    minikube.k8s.io/name=old-k8s-version-337794
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_20T18_42_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 20 Jun 2024 18:42:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-337794
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 20 Jun 2024 18:51:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 20 Jun 2024 18:46:37 +0000   Thu, 20 Jun 2024 18:42:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 20 Jun 2024 18:46:37 +0000   Thu, 20 Jun 2024 18:42:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 20 Jun 2024 18:46:37 +0000   Thu, 20 Jun 2024 18:42:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 20 Jun 2024 18:46:37 +0000   Thu, 20 Jun 2024 18:42:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-337794
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022356Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022356Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8ed3ce8e679449f809f3beec55cb524
	  System UUID:                d2a8bafb-35e4-4e32-918c-7f042d50cb78
	  Boot ID:                    53ebbd48-d2f2-463f-9f24-ddaca7e7841c
	  Kernel Version:             5.15.0-1063-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.33
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 coredns-74ff55c5b-2mwgh                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m32s
	  kube-system                 etcd-old-k8s-version-337794                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m38s
	  kube-system                 kindnet-gkbqq                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m32s
	  kube-system                 kube-apiserver-old-k8s-version-337794             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 kube-controller-manager-old-k8s-version-337794    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 kube-proxy-h4r8m                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m32s
	  kube-system                 kube-scheduler-old-k8s-version-337794             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 metrics-server-9975d5f86-s95qr                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m25s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-kwshm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-9j6rr               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 8m58s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m58s (x5 over 8m58s)  kubelet     Node old-k8s-version-337794 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m58s (x4 over 8m58s)  kubelet     Node old-k8s-version-337794 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m58s (x4 over 8m58s)  kubelet     Node old-k8s-version-337794 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m58s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 8m39s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m39s                  kubelet     Node old-k8s-version-337794 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m39s                  kubelet     Node old-k8s-version-337794 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m39s                  kubelet     Node old-k8s-version-337794 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m38s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m32s                  kubelet     Node old-k8s-version-337794 status is now: NodeReady
	  Normal  Starting                 8m30s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m55s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m55s (x8 over 5m55s)  kubelet     Node old-k8s-version-337794 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m55s (x8 over 5m55s)  kubelet     Node old-k8s-version-337794 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m55s (x7 over 5m55s)  kubelet     Node old-k8s-version-337794 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m55s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m40s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.001041] FS-Cache: O-key=[8] 'db3a5c0100000000'
	[  +0.000687] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.001030] FS-Cache: N-cookie d=00000000c97d82c6{9p.inode} n=000000009e8a7ac6
	[  +0.001035] FS-Cache: N-key=[8] 'db3a5c0100000000'
	[  +0.003582] FS-Cache: Duplicate cookie detected
	[  +0.000717] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000943] FS-Cache: O-cookie d=00000000c97d82c6{9p.inode} n=0000000062e3044e
	[  +0.001046] FS-Cache: O-key=[8] 'db3a5c0100000000'
	[  +0.000707] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000911] FS-Cache: N-cookie d=00000000c97d82c6{9p.inode} n=0000000053d330c2
	[  +0.001013] FS-Cache: N-key=[8] 'db3a5c0100000000'
	[  +2.904604] FS-Cache: Duplicate cookie detected
	[  +0.000734] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000982] FS-Cache: O-cookie d=00000000c97d82c6{9p.inode} n=0000000094842109
	[  +0.001130] FS-Cache: O-key=[8] 'da3a5c0100000000'
	[  +0.000722] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=00000000c97d82c6{9p.inode} n=000000009e8a7ac6
	[  +0.001069] FS-Cache: N-key=[8] 'da3a5c0100000000'
	[  +0.390252] FS-Cache: Duplicate cookie detected
	[  +0.000783] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001087] FS-Cache: O-cookie d=00000000c97d82c6{9p.inode} n=00000000e57313d6
	[  +0.001281] FS-Cache: O-key=[8] 'e03a5c0100000000'
	[  +0.000785] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=00000000c97d82c6{9p.inode} n=00000000059264de
	[  +0.001167] FS-Cache: N-key=[8] 'e03a5c0100000000'
	
	
	==> etcd [78a0fe9b19212a5f753e82266fc6eda58535c93f7b45a9f7ae1027d529ba1b9f] <==
	raft2024/06/20 18:42:32 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/06/20 18:42:32 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/06/20 18:42:32 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-06-20 18:42:32.804810 I | etcdserver: setting up the initial cluster version to 3.4
	2024-06-20 18:42:32.805056 I | etcdserver: published {Name:old-k8s-version-337794 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-06-20 18:42:32.805322 I | embed: ready to serve client requests
	2024-06-20 18:42:32.810953 I | embed: serving client requests on 127.0.0.1:2379
	2024-06-20 18:42:32.811272 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-06-20 18:42:32.813660 I | etcdserver/api: enabled capabilities for version 3.4
	2024-06-20 18:42:32.813766 I | embed: ready to serve client requests
	2024-06-20 18:42:32.816274 I | embed: serving client requests on 192.168.85.2:2379
	2024-06-20 18:42:54.755392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:43:02.358000 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:43:12.357943 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:43:22.358052 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:43:32.357904 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:43:42.357971 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:43:52.357989 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:44:02.357892 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:44:12.358125 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:44:22.357853 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:44:32.357856 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:44:42.358048 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:44:52.367744 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:45:02.358052 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [d10f227a579ece3ba7f135984b5533ab1954b830dbfcff6951b3b52ca343d8c6] <==
	2024-06-20 18:47:27.391362 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:47:37.391229 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:47:47.391208 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:47:57.391059 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:48:07.391434 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:48:17.391304 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:48:27.391195 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:48:37.391373 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:48:47.391360 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:48:57.391236 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:49:07.391304 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:49:17.391100 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:49:27.391175 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:49:37.391192 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:49:47.391305 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:49:57.391275 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:50:07.391205 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:50:17.391145 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:50:27.391217 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:50:37.391276 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:50:47.391242 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:50:57.391355 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:51:07.391433 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:51:17.391522 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-06-20 18:51:27.391264 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 18:51:29 up  2:33,  0 users,  load average: 0.76, 1.88, 2.35
	Linux old-k8s-version-337794 5.15.0-1063-aws #69~20.04.1-Ubuntu SMP Fri May 10 19:21:30 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [13a0bb253788862ae22e6058b7f190fba11dd8db1d86a9df13f8b2aeb42fcecb] <==
	I0620 18:43:00.824264       1 main.go:227] handling current node
	I0620 18:43:10.842458       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:43:10.842699       1 main.go:227] handling current node
	I0620 18:43:20.850535       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:43:20.850563       1 main.go:227] handling current node
	I0620 18:43:30.865609       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:43:30.865639       1 main.go:227] handling current node
	I0620 18:43:40.882319       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:43:40.882347       1 main.go:227] handling current node
	I0620 18:43:50.897502       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:43:50.897528       1 main.go:227] handling current node
	I0620 18:44:00.909910       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:44:00.909945       1 main.go:227] handling current node
	I0620 18:44:10.924920       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:44:10.924946       1 main.go:227] handling current node
	I0620 18:44:20.931481       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:44:20.931511       1 main.go:227] handling current node
	I0620 18:44:30.949514       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:44:30.949543       1 main.go:227] handling current node
	I0620 18:44:40.956803       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:44:40.956829       1 main.go:227] handling current node
	I0620 18:44:50.969947       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:44:50.969975       1 main.go:227] handling current node
	I0620 18:45:00.981270       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:45:00.981305       1 main.go:227] handling current node
	
	
	==> kindnet [e098983ce65fd9a05371c1e9237d75438fa40f748e0a679589f472f0a39a08e8] <==
	I0620 18:49:20.645401       1 main.go:227] handling current node
	I0620 18:49:30.664248       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:49:30.664280       1 main.go:227] handling current node
	I0620 18:49:40.682472       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:49:40.682504       1 main.go:227] handling current node
	I0620 18:49:50.691738       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:49:50.691770       1 main.go:227] handling current node
	I0620 18:50:00.705549       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:50:00.705582       1 main.go:227] handling current node
	I0620 18:50:10.715324       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:50:10.715354       1 main.go:227] handling current node
	I0620 18:50:20.732789       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:50:20.732821       1 main.go:227] handling current node
	I0620 18:50:30.742554       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:50:30.742584       1 main.go:227] handling current node
	I0620 18:50:40.754746       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:50:40.754777       1 main.go:227] handling current node
	I0620 18:50:50.764016       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:50:50.764046       1 main.go:227] handling current node
	I0620 18:51:00.779890       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:51:00.779917       1 main.go:227] handling current node
	I0620 18:51:10.794855       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:51:10.794889       1 main.go:227] handling current node
	I0620 18:51:20.811206       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0620 18:51:20.811235       1 main.go:227] handling current node
	
	
	==> kube-apiserver [ab9a5dc87140998b57eded669bdfe78785f54034fe3076c3679faee0c5e8b168] <==
	I0620 18:48:21.697615       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0620 18:48:21.697626       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0620 18:48:49.915154       1 handler_proxy.go:102] no RequestInfo found in the context
	E0620 18:48:49.915387       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0620 18:48:49.915405       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0620 18:49:01.748376       1 client.go:360] parsed scheme: "passthrough"
	I0620 18:49:01.748420       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0620 18:49:01.748430       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0620 18:49:37.741251       1 client.go:360] parsed scheme: "passthrough"
	I0620 18:49:37.741298       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0620 18:49:37.741309       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0620 18:50:12.200495       1 client.go:360] parsed scheme: "passthrough"
	I0620 18:50:12.200540       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0620 18:50:12.200549       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0620 18:50:44.591832       1 client.go:360] parsed scheme: "passthrough"
	I0620 18:50:44.591875       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0620 18:50:44.591885       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0620 18:50:48.115895       1 handler_proxy.go:102] no RequestInfo found in the context
	E0620 18:50:48.116154       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0620 18:50:48.116171       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0620 18:51:27.242034       1 client.go:360] parsed scheme: "passthrough"
	I0620 18:51:27.242078       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0620 18:51:27.242087       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [cfdcf7bafc98db0ed8b117a7f81af17f4aa0ffe0d900af8fb469323b24ada207] <==
	I0620 18:42:39.962113       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0620 18:42:40.505535       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0620 18:42:40.554517       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0620 18:42:40.720451       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0620 18:42:40.721684       1 controller.go:606] quota admission added evaluator for: endpoints
	I0620 18:42:40.725612       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0620 18:42:41.610505       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0620 18:42:42.325443       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0620 18:42:42.416573       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0620 18:42:50.861838       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0620 18:42:57.618379       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0620 18:42:57.688139       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0620 18:43:06.870453       1 client.go:360] parsed scheme: "passthrough"
	I0620 18:43:06.870496       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0620 18:43:06.870505       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0620 18:43:43.291456       1 client.go:360] parsed scheme: "passthrough"
	I0620 18:43:43.291506       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0620 18:43:43.291515       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0620 18:44:25.430353       1 client.go:360] parsed scheme: "passthrough"
	I0620 18:44:25.430412       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0620 18:44:25.430420       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0620 18:44:57.517570       1 client.go:360] parsed scheme: "passthrough"
	I0620 18:44:57.517672       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0620 18:44:57.517893       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0620 18:45:02.173496       1 upgradeaware.go:387] Error proxying data from backend to client: write tcp 192.168.85.2:8443->192.168.85.1:54174: write: connection reset by peer
	
	
	==> kube-controller-manager [6ce85447f9024fd431a26a603f1a93ee2dd0f45da6feddb6ce6fee810b5fdd37] <==
	I0620 18:42:57.658680       1 shared_informer.go:247] Caches are synced for deployment 
	I0620 18:42:57.693565       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gkbqq"
	I0620 18:42:57.706786       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0620 18:42:57.707056       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0620 18:42:57.708266       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0620 18:42:57.715548       1 shared_informer.go:247] Caches are synced for disruption 
	I0620 18:42:57.715732       1 disruption.go:339] Sending events to api server.
	I0620 18:42:57.720277       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0620 18:42:57.733887       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-h4r8m"
	I0620 18:42:57.758066       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-5sczq"
	I0620 18:42:57.767222       1 shared_informer.go:247] Caches are synced for resource quota 
	I0620 18:42:57.824199       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-2mwgh"
	I0620 18:42:57.918215       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0620 18:42:57.918260       1 shared_informer.go:247] Caches are synced for resource quota 
	I0620 18:42:57.930487       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0620 18:42:58.213469       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0620 18:42:58.213494       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0620 18:42:58.231222       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0620 18:42:59.041811       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0620 18:42:59.055799       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-5sczq"
	I0620 18:43:02.708255       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b-2mwgh" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-74ff55c5b-2mwgh"
	I0620 18:43:02.708587       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b-5sczq" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-74ff55c5b-5sczq"
	I0620 18:43:02.708687       1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0620 18:43:02.709014       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0620 18:45:03.003644       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	
	
	==> kube-controller-manager [ae831abf0d9b3aec7fd99a96e612f60ca28994573225480bd39d669115fd9323] <==
	E0620 18:47:05.634630       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0620 18:47:10.125967       1 request.go:655] Throttling request took 1.048144496s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0620 18:47:10.977664       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0620 18:47:36.136616       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0620 18:47:42.628130       1 request.go:655] Throttling request took 1.048484407s, request: GET:https://192.168.85.2:8443/apis/autoscaling/v2beta2?timeout=32s
	W0620 18:47:43.479549       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0620 18:48:06.638286       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0620 18:48:15.129999       1 request.go:655] Throttling request took 1.048436309s, request: GET:https://192.168.85.2:8443/apis/policy/v1beta1?timeout=32s
	W0620 18:48:15.981592       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0620 18:48:37.140367       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0620 18:48:47.631941       1 request.go:655] Throttling request took 1.048317173s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0620 18:48:48.483429       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0620 18:49:07.643167       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0620 18:49:20.200936       1 request.go:655] Throttling request took 1.048373332s, request: GET:https://192.168.85.2:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
	W0620 18:49:21.052570       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0620 18:49:38.145103       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0620 18:49:52.702953       1 request.go:655] Throttling request took 1.048225647s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0620 18:49:53.554430       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0620 18:50:08.646980       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0620 18:50:25.205043       1 request.go:655] Throttling request took 1.048271621s, request: GET:https://192.168.85.2:8443/apis/node.k8s.io/v1beta1?timeout=32s
	W0620 18:50:26.056468       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0620 18:50:39.149085       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0620 18:50:57.706913       1 request.go:655] Throttling request took 1.047944948s, request: GET:https://192.168.85.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
	W0620 18:50:58.558368       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0620 18:51:09.650717       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [0084076706589b3159aa05b37a0ef12c102644140788b191724b0f7267fd4e3c] <==
	I0620 18:42:59.951539       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0620 18:42:59.951624       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0620 18:42:59.976997       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0620 18:42:59.977095       1 server_others.go:185] Using iptables Proxier.
	I0620 18:42:59.977325       1 server.go:650] Version: v1.20.0
	I0620 18:42:59.977839       1 config.go:315] Starting service config controller
	I0620 18:42:59.977859       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0620 18:42:59.980449       1 config.go:224] Starting endpoint slice config controller
	I0620 18:42:59.980485       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0620 18:43:00.088614       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0620 18:43:00.088677       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [fd105e227c64f742488149d7a7ef12f65582c9a17ebf40af0d31772fff23b850] <==
	I0620 18:45:49.120431       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0620 18:45:49.120558       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0620 18:45:49.161346       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0620 18:45:49.161430       1 server_others.go:185] Using iptables Proxier.
	I0620 18:45:49.161633       1 server.go:650] Version: v1.20.0
	I0620 18:45:49.162123       1 config.go:315] Starting service config controller
	I0620 18:45:49.162132       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0620 18:45:49.164163       1 config.go:224] Starting endpoint slice config controller
	I0620 18:45:49.164173       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0620 18:45:49.262263       1 shared_informer.go:247] Caches are synced for service config 
	I0620 18:45:49.264312       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [216bb91d7edcfdb7e26e452ac2980c11fb6d169bc9ec8ff9d4125cefc89b4af8] <==
	I0620 18:45:40.391731       1 serving.go:331] Generated self-signed cert in-memory
	W0620 18:45:46.898260       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0620 18:45:46.899047       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0620 18:45:46.899109       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0620 18:45:46.899135       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0620 18:45:47.208090       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0620 18:45:47.208172       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0620 18:45:47.208179       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0620 18:45:47.208190       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0620 18:45:47.308723       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [e895300ffd7e6f784d9567baacfb9b233853b31245990e391675289e62ad7659] <==
	W0620 18:42:39.068134       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0620 18:42:39.068162       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0620 18:42:39.068176       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0620 18:42:39.068181       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0620 18:42:39.170774       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0620 18:42:39.180152       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0620 18:42:39.180366       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0620 18:42:39.180500       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0620 18:42:39.189269       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0620 18:42:39.192942       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0620 18:42:39.194637       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0620 18:42:39.194785       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0620 18:42:39.207435       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0620 18:42:39.207557       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0620 18:42:39.207633       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0620 18:42:39.207704       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0620 18:42:39.207783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0620 18:42:39.218486       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0620 18:42:39.218858       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0620 18:42:39.219070       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0620 18:42:40.053621       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0620 18:42:40.173611       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0620 18:42:40.195768       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0620 18:42:40.264077       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0620 18:42:40.581984       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Jun 20 18:49:57 old-k8s-version-337794 kubelet[665]: E0620 18:49:57.503168     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 20 18:50:04 old-k8s-version-337794 kubelet[665]: I0620 18:50:04.502954     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: de5c94d9a849971f51404389ff5596adb34621d7121c4770dff3a957a9a3898f
	Jun 20 18:50:04 old-k8s-version-337794 kubelet[665]: E0620 18:50:04.503416     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	Jun 20 18:50:08 old-k8s-version-337794 kubelet[665]: E0620 18:50:08.504863     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 20 18:50:16 old-k8s-version-337794 kubelet[665]: I0620 18:50:16.502456     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: de5c94d9a849971f51404389ff5596adb34621d7121c4770dff3a957a9a3898f
	Jun 20 18:50:16 old-k8s-version-337794 kubelet[665]: E0620 18:50:16.503299     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	Jun 20 18:50:19 old-k8s-version-337794 kubelet[665]: E0620 18:50:19.503499     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 20 18:50:30 old-k8s-version-337794 kubelet[665]: I0620 18:50:30.502438     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: de5c94d9a849971f51404389ff5596adb34621d7121c4770dff3a957a9a3898f
	Jun 20 18:50:30 old-k8s-version-337794 kubelet[665]: E0620 18:50:30.503459     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	Jun 20 18:50:34 old-k8s-version-337794 kubelet[665]: E0620 18:50:34.504128     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 20 18:50:44 old-k8s-version-337794 kubelet[665]: I0620 18:50:44.502881     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: de5c94d9a849971f51404389ff5596adb34621d7121c4770dff3a957a9a3898f
	Jun 20 18:50:44 old-k8s-version-337794 kubelet[665]: E0620 18:50:44.503802     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	Jun 20 18:50:48 old-k8s-version-337794 kubelet[665]: E0620 18:50:48.503323     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 20 18:50:55 old-k8s-version-337794 kubelet[665]: I0620 18:50:55.502421     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: de5c94d9a849971f51404389ff5596adb34621d7121c4770dff3a957a9a3898f
	Jun 20 18:50:55 old-k8s-version-337794 kubelet[665]: E0620 18:50:55.503245     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	Jun 20 18:51:03 old-k8s-version-337794 kubelet[665]: E0620 18:51:03.503156     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 20 18:51:08 old-k8s-version-337794 kubelet[665]: I0620 18:51:08.506103     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: de5c94d9a849971f51404389ff5596adb34621d7121c4770dff3a957a9a3898f
	Jun 20 18:51:08 old-k8s-version-337794 kubelet[665]: E0620 18:51:08.506428     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	Jun 20 18:51:17 old-k8s-version-337794 kubelet[665]: E0620 18:51:17.503358     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 20 18:51:23 old-k8s-version-337794 kubelet[665]: I0620 18:51:23.502398     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: de5c94d9a849971f51404389ff5596adb34621d7121c4770dff3a957a9a3898f
	Jun 20 18:51:23 old-k8s-version-337794 kubelet[665]: E0620 18:51:23.503312     665 pod_workers.go:191] Error syncing pod be9a2daf-d4c5-4163-815e-337ffd00fcb7 ("dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kwshm_kubernetes-dashboard(be9a2daf-d4c5-4163-815e-337ffd00fcb7)"
	Jun 20 18:51:28 old-k8s-version-337794 kubelet[665]: E0620 18:51:28.512840     665 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Jun 20 18:51:28 old-k8s-version-337794 kubelet[665]: E0620 18:51:28.513356     665 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Jun 20 18:51:28 old-k8s-version-337794 kubelet[665]: E0620 18:51:28.513570     665 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-5jskx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-s95qr_kube-system(8ce729f
c-f814-4c22-b0f3-aeb9f468865b): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Jun 20 18:51:28 old-k8s-version-337794 kubelet[665]: E0620 18:51:28.513737     665 pod_workers.go:191] Error syncing pod 8ce729fc-f814-4c22-b0f3-aeb9f468865b ("metrics-server-9975d5f86-s95qr_kube-system(8ce729fc-f814-4c22-b0f3-aeb9f468865b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	
	
	==> kubernetes-dashboard [9b335eeb8b67e4c38ce7a7e9a7bc9f20eb7a66185e54a8a71a8ecb76c3a9aa5f] <==
	2024/06/20 18:46:14 Using namespace: kubernetes-dashboard
	2024/06/20 18:46:14 Using in-cluster config to connect to apiserver
	2024/06/20 18:46:14 Using secret token for csrf signing
	2024/06/20 18:46:14 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/06/20 18:46:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/06/20 18:46:15 Successful initial request to the apiserver, version: v1.20.0
	2024/06/20 18:46:15 Generating JWE encryption key
	2024/06/20 18:46:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/06/20 18:46:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/06/20 18:46:15 Initializing JWE encryption key from synchronized object
	2024/06/20 18:46:15 Creating in-cluster Sidecar client
	2024/06/20 18:46:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/20 18:46:15 Serving insecurely on HTTP port: 9090
	2024/06/20 18:46:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/20 18:47:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/20 18:47:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/20 18:48:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/20 18:48:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/20 18:49:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/20 18:49:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/20 18:50:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/20 18:50:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/20 18:51:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/06/20 18:46:14 Starting overwatch
	
	
	==> storage-provisioner [2361035748bd059e7b4f11b7715ec3ccdb81adbf542d803180ebb8bec389d445] <==
	I0620 18:43:30.323812       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0620 18:43:30.338729       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0620 18:43:30.338968       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0620 18:43:30.347755       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0620 18:43:30.348092       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-337794_281ffb95-e7bf-46d6-88ca-6e980c98d3be!
	I0620 18:43:30.349549       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bdd411a0-d2bf-4430-8124-3a8fcc95c861", APIVersion:"v1", ResourceVersion:"517", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-337794_281ffb95-e7bf-46d6-88ca-6e980c98d3be became leader
	I0620 18:43:30.449284       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-337794_281ffb95-e7bf-46d6-88ca-6e980c98d3be!
	
	
	==> storage-provisioner [eed6e9e374d62971b45702bee247f112b7994fb1614a6637e94d738953631a39] <==
	I0620 18:45:51.278724       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0620 18:45:51.290752       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0620 18:45:51.290814       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0620 18:46:08.786594       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0620 18:46:08.793773       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-337794_10b395e9-3076-4c29-bf9d-45adf0d0ccc4!
	I0620 18:46:08.798920       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bdd411a0-d2bf-4430-8124-3a8fcc95c861", APIVersion:"v1", ResourceVersion:"805", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-337794_10b395e9-3076-4c29-bf9d-45adf0d0ccc4 became leader
	I0620 18:46:08.894002       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-337794_10b395e9-3076-4c29-bf9d-45adf0d0ccc4!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-337794 -n old-k8s-version-337794
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-337794 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-s95qr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-337794 describe pod metrics-server-9975d5f86-s95qr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-337794 describe pod metrics-server-9975d5f86-s95qr: exit status 1 (146.599155ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-s95qr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-337794 describe pod metrics-server-9975d5f86-s95qr: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (372.61s)

                                                
                                    

Test pass (293/328)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 11.17
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.2/json-events 6.02
13 TestDownloadOnly/v1.30.2/preload-exists 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.07
18 TestDownloadOnly/v1.30.2/DeleteAll 0.21
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.53
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 220.7
29 TestAddons/parallel/Registry 16.25
31 TestAddons/parallel/InspektorGadget 10.78
32 TestAddons/parallel/MetricsServer 6.77
35 TestAddons/parallel/CSI 49.6
36 TestAddons/parallel/Headlamp 11.05
37 TestAddons/parallel/CloudSpanner 5.6
38 TestAddons/parallel/LocalPath 51.69
39 TestAddons/parallel/NvidiaDevicePlugin 5.74
40 TestAddons/parallel/Yakd 6.01
41 TestAddons/parallel/Volcano 160.57
44 TestAddons/serial/GCPAuth/Namespaces 0.17
45 TestAddons/StoppedEnableDisable 12.4
46 TestCertOptions 38.75
47 TestCertExpiration 231.45
49 TestForceSystemdFlag 41.04
50 TestForceSystemdEnv 39.4
51 TestDockerEnvContainerd 136.57
56 TestErrorSpam/setup 31.05
57 TestErrorSpam/start 0.75
58 TestErrorSpam/status 0.96
59 TestErrorSpam/pause 1.66
60 TestErrorSpam/unpause 1.73
61 TestErrorSpam/stop 1.42
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 63.68
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6.07
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 14.45
73 TestFunctional/serial/CacheCmd/cache/add_local 1.36
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.11
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.13
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
81 TestFunctional/serial/ExtraConfig 29.61
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.44
84 TestFunctional/serial/LogsFileCmd 1.44
85 TestFunctional/serial/InvalidService 4.45
87 TestFunctional/parallel/ConfigCmd 0.45
88 TestFunctional/parallel/DashboardCmd 10.03
89 TestFunctional/parallel/DryRun 0.43
90 TestFunctional/parallel/InternationalLanguage 0.22
91 TestFunctional/parallel/StatusCmd 1.23
95 TestFunctional/parallel/ServiceCmdConnect 10.7
96 TestFunctional/parallel/AddonsCmd 0.17
97 TestFunctional/parallel/PersistentVolumeClaim 26.34
99 TestFunctional/parallel/SSHCmd 0.66
100 TestFunctional/parallel/CpCmd 2.1
102 TestFunctional/parallel/FileSync 0.28
103 TestFunctional/parallel/CertSync 2.05
107 TestFunctional/parallel/NodeLabels 0.09
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
111 TestFunctional/parallel/License 0.25
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.7
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.44
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
125 TestFunctional/parallel/ServiceCmd/List 0.59
126 TestFunctional/parallel/ProfileCmd/profile_list 0.41
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
130 TestFunctional/parallel/MountCmd/any-port 7.66
131 TestFunctional/parallel/ServiceCmd/Format 0.36
132 TestFunctional/parallel/ServiceCmd/URL 0.48
133 TestFunctional/parallel/MountCmd/specific-port 2.01
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.5
135 TestFunctional/parallel/Version/short 0.07
136 TestFunctional/parallel/Version/components 1.34
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
141 TestFunctional/parallel/ImageCommands/ImageBuild 2.76
142 TestFunctional/parallel/ImageCommands/Setup 1.84
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.62
153 TestFunctional/delete_addon-resizer_images 0.08
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 132.26
160 TestMultiControlPlane/serial/DeployApp 28.38
161 TestMultiControlPlane/serial/PingHostFromPods 1.61
162 TestMultiControlPlane/serial/AddWorkerNode 20.94
163 TestMultiControlPlane/serial/NodeLabels 0.14
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.76
165 TestMultiControlPlane/serial/CopyFile 19.05
166 TestMultiControlPlane/serial/StopSecondaryNode 12.9
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
168 TestMultiControlPlane/serial/RestartSecondaryNode 27.73
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.8
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 128.48
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.65
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
173 TestMultiControlPlane/serial/StopCluster 35.99
174 TestMultiControlPlane/serial/RestartCluster 70.13
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.52
176 TestMultiControlPlane/serial/AddSecondaryNode 45.1
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.73
181 TestJSONOutput/start/Command 58.52
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.73
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.68
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.75
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.23
206 TestKicCustomNetwork/create_custom_network 38.47
207 TestKicCustomNetwork/use_default_bridge_network 35.44
208 TestKicExistingNetwork 33.55
209 TestKicCustomSubnet 34.43
210 TestKicStaticIP 37.25
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 63.57
215 TestMountStart/serial/StartWithMountFirst 6.87
216 TestMountStart/serial/VerifyMountFirst 0.25
217 TestMountStart/serial/StartWithMountSecond 6.12
218 TestMountStart/serial/VerifyMountSecond 0.25
219 TestMountStart/serial/DeleteFirst 1.62
220 TestMountStart/serial/VerifyMountPostDelete 0.25
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 7.57
223 TestMountStart/serial/VerifyMountPostStop 0.26
226 TestMultiNode/serial/FreshStart2Nodes 76.98
227 TestMultiNode/serial/DeployApp2Nodes 5.19
228 TestMultiNode/serial/PingHostFrom2Pods 0.98
229 TestMultiNode/serial/AddNode 16.8
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.33
232 TestMultiNode/serial/CopyFile 9.89
233 TestMultiNode/serial/StopNode 2.23
234 TestMultiNode/serial/StartAfterStop 9.55
235 TestMultiNode/serial/RestartKeepsNodes 82.74
236 TestMultiNode/serial/DeleteNode 5.42
237 TestMultiNode/serial/StopMultiNode 24.03
238 TestMultiNode/serial/RestartMultiNode 50.5
239 TestMultiNode/serial/ValidateNameConflict 35.99
244 TestPreload 120.54
246 TestScheduledStopUnix 106.34
249 TestInsufficientStorage 10.15
250 TestRunningBinaryUpgrade 82.76
252 TestKubernetesUpgrade 351.16
253 TestMissingContainerUpgrade 174.91
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 39.78
257 TestNoKubernetes/serial/StartWithStopK8s 17.37
258 TestNoKubernetes/serial/Start 6.78
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
260 TestNoKubernetes/serial/ProfileList 1.06
261 TestNoKubernetes/serial/Stop 1.27
262 TestNoKubernetes/serial/StartNoArgs 7.55
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.4
264 TestStoppedBinaryUpgrade/Setup 0.68
265 TestStoppedBinaryUpgrade/Upgrade 88.75
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.34
275 TestPause/serial/Start 64.23
276 TestPause/serial/SecondStartNoReconfiguration 7.18
277 TestPause/serial/Pause 0.98
278 TestPause/serial/VerifyStatus 0.49
279 TestPause/serial/Unpause 0.89
280 TestPause/serial/PauseAgain 1.25
281 TestPause/serial/DeletePaused 2.89
282 TestPause/serial/VerifyDeletedResources 0.18
290 TestNetworkPlugins/group/false 5.07
295 TestStartStop/group/old-k8s-version/serial/FirstStart 174.1
296 TestStartStop/group/old-k8s-version/serial/DeployApp 9.66
298 TestStartStop/group/no-preload/serial/FirstStart 75.38
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.33
300 TestStartStop/group/old-k8s-version/serial/Stop 14.63
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
303 TestStartStop/group/no-preload/serial/DeployApp 8.42
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
305 TestStartStop/group/no-preload/serial/Stop 12.07
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
307 TestStartStop/group/no-preload/serial/SecondStart 297.68
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
309 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
310 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
311 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.15
312 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
313 TestStartStop/group/old-k8s-version/serial/Pause 3.48
314 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
315 TestStartStop/group/no-preload/serial/Pause 3.75
317 TestStartStop/group/embed-certs/serial/FirstStart 95.7
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 98.22
320 TestStartStop/group/embed-certs/serial/DeployApp 9.34
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.34
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.12
323 TestStartStop/group/embed-certs/serial/Stop 12.01
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.07
325 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.08
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
327 TestStartStop/group/embed-certs/serial/SecondStart 269.99
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 271.68
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.11
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.15
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
335 TestStartStop/group/embed-certs/serial/Pause 3.09
336 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
337 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.87
339 TestStartStop/group/newest-cni/serial/FirstStart 49.59
340 TestNetworkPlugins/group/auto/Start 90.13
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.08
343 TestStartStop/group/newest-cni/serial/Stop 1.22
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
345 TestStartStop/group/newest-cni/serial/SecondStart 17.69
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
349 TestStartStop/group/newest-cni/serial/Pause 3.04
350 TestNetworkPlugins/group/kindnet/Start 90.08
351 TestNetworkPlugins/group/auto/KubeletFlags 0.37
352 TestNetworkPlugins/group/auto/NetCatPod 9.34
353 TestNetworkPlugins/group/auto/DNS 0.24
354 TestNetworkPlugins/group/auto/Localhost 0.22
355 TestNetworkPlugins/group/auto/HairPin 0.24
356 TestNetworkPlugins/group/calico/Start 70.93
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
359 TestNetworkPlugins/group/kindnet/NetCatPod 11.34
360 TestNetworkPlugins/group/kindnet/DNS 0.2
361 TestNetworkPlugins/group/kindnet/Localhost 0.24
362 TestNetworkPlugins/group/kindnet/HairPin 0.22
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/custom-flannel/Start 68.6
365 TestNetworkPlugins/group/calico/KubeletFlags 0.37
366 TestNetworkPlugins/group/calico/NetCatPod 10.32
367 TestNetworkPlugins/group/calico/DNS 0.26
368 TestNetworkPlugins/group/calico/Localhost 0.22
369 TestNetworkPlugins/group/calico/HairPin 0.2
370 TestNetworkPlugins/group/enable-default-cni/Start 48.98
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.42
373 TestNetworkPlugins/group/custom-flannel/DNS 0.2
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.43
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
381 TestNetworkPlugins/group/flannel/Start 66.64
382 TestNetworkPlugins/group/bridge/Start 55.74
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
385 TestNetworkPlugins/group/flannel/NetCatPod 11.25
386 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
387 TestNetworkPlugins/group/bridge/NetCatPod 10.27
388 TestNetworkPlugins/group/flannel/DNS 0.2
389 TestNetworkPlugins/group/flannel/Localhost 0.17
390 TestNetworkPlugins/group/flannel/HairPin 0.19
391 TestNetworkPlugins/group/bridge/DNS 0.27
392 TestNetworkPlugins/group/bridge/Localhost 0.26
393 TestNetworkPlugins/group/bridge/HairPin 0.23
TestDownloadOnly/v1.20.0/json-events (11.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-678265 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-678265 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.164379686s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.17s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-678265
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-678265: exit status 85 (69.355744ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-678265 | jenkins | v1.33.1 | 20 Jun 24 17:55 UTC |          |
	|         | -p download-only-678265        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/20 17:55:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0620 17:55:14.233392  279676 out.go:291] Setting OutFile to fd 1 ...
	I0620 17:55:14.233524  279676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 17:55:14.233534  279676 out.go:304] Setting ErrFile to fd 2...
	I0620 17:55:14.233539  279676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 17:55:14.233808  279676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19106-274269/.minikube/bin
	W0620 17:55:14.233941  279676 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19106-274269/.minikube/config/config.json: open /home/jenkins/minikube-integration/19106-274269/.minikube/config/config.json: no such file or directory
	I0620 17:55:14.234360  279676 out.go:298] Setting JSON to true
	I0620 17:55:14.235269  279676 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5865,"bootTime":1718900250,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0620 17:55:14.235341  279676 start.go:139] virtualization:  
	I0620 17:55:14.238191  279676 out.go:97] [download-only-678265] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0620 17:55:14.238326  279676 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/19106-274269/.minikube/cache/preloaded-tarball: no such file or directory
	I0620 17:55:14.238367  279676 notify.go:220] Checking for updates...
	I0620 17:55:14.240254  279676 out.go:169] MINIKUBE_LOCATION=19106
	I0620 17:55:14.242440  279676 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0620 17:55:14.244496  279676 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19106-274269/kubeconfig
	I0620 17:55:14.246164  279676 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19106-274269/.minikube
	I0620 17:55:14.248008  279676 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0620 17:55:14.251359  279676 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0620 17:55:14.251633  279676 driver.go:392] Setting default libvirt URI to qemu:///system
	I0620 17:55:14.276960  279676 docker.go:122] docker version: linux-26.1.4:Docker Engine - Community
	I0620 17:55:14.277062  279676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0620 17:55:14.331338  279676 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-06-20 17:55:14.321775182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0620 17:55:14.331446  279676 docker.go:295] overlay module found
	I0620 17:55:14.333561  279676 out.go:97] Using the docker driver based on user configuration
	I0620 17:55:14.333588  279676 start.go:297] selected driver: docker
	I0620 17:55:14.333594  279676 start.go:901] validating driver "docker" against <nil>
	I0620 17:55:14.333706  279676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0620 17:55:14.384708  279676 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-06-20 17:55:14.375227651 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0620 17:55:14.384887  279676 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0620 17:55:14.385154  279676 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0620 17:55:14.385306  279676 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0620 17:55:14.387787  279676 out.go:169] Using Docker driver with root privileges
	I0620 17:55:14.390083  279676 cni.go:84] Creating CNI manager for ""
	I0620 17:55:14.390112  279676 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0620 17:55:14.390130  279676 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0620 17:55:14.390221  279676 start.go:340] cluster config:
	{Name:download-only-678265 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-678265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0620 17:55:14.391991  279676 out.go:97] Starting "download-only-678265" primary control-plane node in "download-only-678265" cluster
	I0620 17:55:14.392013  279676 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0620 17:55:14.393603  279676 out.go:97] Pulling base image v0.0.44-1718753665-19106 ...
	I0620 17:55:14.393641  279676 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0620 17:55:14.393671  279676 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 in local docker daemon
	I0620 17:55:14.410937  279676 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 to local cache
	I0620 17:55:14.411168  279676 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 in local cache directory
	I0620 17:55:14.411266  279676 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 to local cache
	I0620 17:55:14.460813  279676 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0620 17:55:14.460858  279676 cache.go:56] Caching tarball of preloaded images
	I0620 17:55:14.462097  279676 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0620 17:55:14.464400  279676 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0620 17:55:14.464440  279676 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0620 17:55:14.546742  279676 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19106-274269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0620 17:55:20.094130  279676 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 as a tarball
	I0620 17:55:21.227343  279676 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0620 17:55:21.227437  279676 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/19106-274269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0620 17:55:22.311385  279676 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0620 17:55:22.311764  279676 profile.go:143] Saving config to /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/download-only-678265/config.json ...
	I0620 17:55:22.311801  279676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/download-only-678265/config.json: {Name:mk8a0912ea5a776e0ed8bd107dc223b06b1c416d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0620 17:55:22.311995  279676 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0620 17:55:22.312198  279676 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19106-274269/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-678265 host does not exist
	  To start a cluster, run: "minikube start -p download-only-678265"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
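
The log above shows the preload flow: download.go fetches the preload tarball with an md5 checksum appended to the URL, and preload.go then saves and verifies that checksum before the tarball is trusted. As a rough illustration only (this is not minikube's actual code; the helper name is made up, and only the URL and md5 value are taken from the log), a download-then-verify step in Go could look like this:

	// download_verify.go - hedged sketch of "download, then verify md5" as seen above.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// downloadAndVerifyMD5 streams url into dest while hashing the bytes,
	// then compares the hex digest against wantMD5.
	func downloadAndVerifyMD5(url, dest, wantMD5 string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()

		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		// URL and checksum copied from the download line above; the destination path is shortened.
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4"
		if err := downloadAndVerifyMD5(url, "preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4", "7e3d48ccb9f143791669d02e14ce1643"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}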

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-678265
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.2/json-events (6.02s)

=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-636496 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-636496 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.020031161s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (6.02s)

                                                
                                    
TestDownloadOnly/v1.30.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-636496
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-636496: exit status 85 (69.377634ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-678265 | jenkins | v1.33.1 | 20 Jun 24 17:55 UTC |                     |
	|         | -p download-only-678265        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 20 Jun 24 17:55 UTC | 20 Jun 24 17:55 UTC |
	| delete  | -p download-only-678265        | download-only-678265 | jenkins | v1.33.1 | 20 Jun 24 17:55 UTC | 20 Jun 24 17:55 UTC |
	| start   | -o=json --download-only        | download-only-636496 | jenkins | v1.33.1 | 20 Jun 24 17:55 UTC |                     |
	|         | -p download-only-636496        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/20 17:55:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0620 17:55:25.797108  279847 out.go:291] Setting OutFile to fd 1 ...
	I0620 17:55:25.797294  279847 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 17:55:25.797322  279847 out.go:304] Setting ErrFile to fd 2...
	I0620 17:55:25.797345  279847 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 17:55:25.797606  279847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19106-274269/.minikube/bin
	I0620 17:55:25.798021  279847 out.go:298] Setting JSON to true
	I0620 17:55:25.798886  279847 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5876,"bootTime":1718900250,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0620 17:55:25.798982  279847 start.go:139] virtualization:  
	I0620 17:55:25.807764  279847 out.go:97] [download-only-636496] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0620 17:55:25.808002  279847 notify.go:220] Checking for updates...
	I0620 17:55:25.813036  279847 out.go:169] MINIKUBE_LOCATION=19106
	I0620 17:55:25.815600  279847 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0620 17:55:25.817315  279847 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19106-274269/kubeconfig
	I0620 17:55:25.819510  279847 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19106-274269/.minikube
	I0620 17:55:25.821732  279847 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0620 17:55:25.825784  279847 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0620 17:55:25.826032  279847 driver.go:392] Setting default libvirt URI to qemu:///system
	I0620 17:55:25.853021  279847 docker.go:122] docker version: linux-26.1.4:Docker Engine - Community
	I0620 17:55:25.853133  279847 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0620 17:55:25.909270  279847 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-06-20 17:55:25.899921866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0620 17:55:25.909387  279847 docker.go:295] overlay module found
	I0620 17:55:25.911294  279847 out.go:97] Using the docker driver based on user configuration
	I0620 17:55:25.911322  279847 start.go:297] selected driver: docker
	I0620 17:55:25.911329  279847 start.go:901] validating driver "docker" against <nil>
	I0620 17:55:25.911440  279847 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0620 17:55:25.970392  279847 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-06-20 17:55:25.961443555 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0620 17:55:25.970607  279847 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0620 17:55:25.970894  279847 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0620 17:55:25.971080  279847 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0620 17:55:25.978585  279847 out.go:169] Using Docker driver with root privileges
	I0620 17:55:25.986530  279847 cni.go:84] Creating CNI manager for ""
	I0620 17:55:25.986567  279847 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0620 17:55:25.986585  279847 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0620 17:55:25.986686  279847 start.go:340] cluster config:
	{Name:download-only-636496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-636496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0620 17:55:25.991148  279847 out.go:97] Starting "download-only-636496" primary control-plane node in "download-only-636496" cluster
	I0620 17:55:25.991171  279847 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0620 17:55:25.992959  279847 out.go:97] Pulling base image v0.0.44-1718753665-19106 ...
	I0620 17:55:25.992984  279847 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime containerd
	I0620 17:55:25.993148  279847 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 in local docker daemon
	I0620 17:55:26.012860  279847 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 to local cache
	I0620 17:55:26.012992  279847 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 in local cache directory
	I0620 17:55:26.013017  279847 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 in local cache directory, skipping pull
	I0620 17:55:26.013022  279847 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 exists in cache, skipping pull
	I0620 17:55:26.013033  279847 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 as a tarball
	I0620 17:55:26.045410  279847 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-arm64.tar.lz4
	I0620 17:55:26.045441  279847 cache.go:56] Caching tarball of preloaded images
	I0620 17:55:26.045635  279847 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime containerd
	I0620 17:55:26.047836  279847 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0620 17:55:26.047857  279847 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-arm64.tar.lz4 ...
	I0620 17:55:26.143450  279847 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:5f38272c206cc90312ddc23a9bcf8a1f -> /home/jenkins/minikube-integration/19106-274269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-arm64.tar.lz4
	I0620 17:55:30.226388  279847 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-arm64.tar.lz4 ...
	I0620 17:55:30.226531  279847 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/19106-274269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-636496 host does not exist
	  To start a cluster, run: "minikube start -p download-only-636496"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-636496
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.53s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-252165 --alsologtostderr --binary-mirror http://127.0.0.1:34775 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-252165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-252165
--- PASS: TestBinaryMirror (0.53s)
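
TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:34775 so that Kubernetes binaries are fetched from a local HTTP endpoint instead of the default upstream. Any plain HTTP file server exposing the paths minikube asks for will do; as a hedged sketch (the directory name is made up and the exact path layout minikube requests is not shown in this report), such a mirror could be as small as:

	// binary_mirror.go - hypothetical local mirror endpoint for --binary-mirror.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a local directory; it would need to contain the binaries at
		// whatever paths minikube requests (for example a kubectl binary for
		// the target version and architecture).
		fs := http.FileServer(http.Dir("./mirror"))
		log.Println("serving binary mirror on http://127.0.0.1:34775")
		log.Fatal(http.ListenAndServe("127.0.0.1:34775", fs))
	}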

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-527088
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-527088: exit status 85 (84.826904ms)

                                                
                                                
-- stdout --
	* Profile "addons-527088" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-527088"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-527088
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-527088: exit status 85 (76.808473ms)

                                                
                                                
-- stdout --
	* Profile "addons-527088" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-527088"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (220.7s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-527088 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-527088 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m40.693900664s)
--- PASS: TestAddons/Setup (220.70s)

                                                
                                    
TestAddons/parallel/Registry (16.25s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 39.001776ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6dksb" [3be3b3bd-56ca-4f7d-9f6b-057cc5818b82] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004494019s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kdb4q" [5cbce6b3-228b-4728-b364-a7df88294438] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004560756s
addons_test.go:342: (dbg) Run:  kubectl --context addons-527088 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-527088 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-527088 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.114252778s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-527088 ip
2024/06/20 17:59:29 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-527088 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.25s)
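
The Registry test above exercises the addon end to end: it waits for the registry and registry-proxy pods, runs a throwaway busybox pod that issues "wget --spider -S http://registry.kube-system.svc.cluster.local", and finally fetches http://192.168.49.2:5000 from the host. A minimal Go equivalent of the in-cluster reachability probe is sketched below; it is illustrative only (not the test's code), and the service DNS name resolves only from inside the cluster:

	// registry_probe.go - hedged sketch of a wget --spider style check.
	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 10 * time.Second}
		// HEAD request: like wget --spider, we only care that the endpoint answers.
		resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			fmt.Fprintln(os.Stderr, "registry not reachable:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}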

                                                
                                    
TestAddons/parallel/InspektorGadget (10.78s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nsbfc" [93131955-6e31-40f6-8c56-4e56817ca270] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004753808s
addons_test.go:843: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-527088
addons_test.go:843: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-527088: (5.776120269s)
--- PASS: TestAddons/parallel/InspektorGadget (10.78s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.77s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.118881ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-4cgsc" [07aa09b6-d3f1-4d78-9265-b99c9de1ab03] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006053614s
addons_test.go:417: (dbg) Run:  kubectl --context addons-527088 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-527088 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.77s)

                                                
                                    
TestAddons/parallel/CSI (49.6s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 7.007846ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-527088 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-527088 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9e326630-77c6-40c4-9b15-c8dba0ef41c0] Pending
helpers_test.go:344: "task-pv-pod" [9e326630-77c6-40c4-9b15-c8dba0ef41c0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9e326630-77c6-40c4-9b15-c8dba0ef41c0] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004215665s
addons_test.go:586: (dbg) Run:  kubectl --context addons-527088 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-527088 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-527088 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-527088 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-527088 delete pod task-pv-pod: (1.653124057s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-527088 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-527088 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-527088 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [13ce610f-ab94-4196-aacc-5eb2726cce91] Pending
helpers_test.go:344: "task-pv-pod-restore" [13ce610f-ab94-4196-aacc-5eb2726cce91] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [13ce610f-ab94-4196-aacc-5eb2726cce91] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004046132s
addons_test.go:628: (dbg) Run:  kubectl --context addons-527088 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-527088 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-527088 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-arm64 -p addons-527088 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-arm64 -p addons-527088 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.71360326s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-527088 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (49.60s)
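
Most of the CSI test is the wait loop visible above: helpers_test.go keeps running "kubectl get pvc <name> -o jsonpath={.status.phase}" until the claim reports the phase it is waiting for, and only then moves on to the snapshot and restore steps. A hedged sketch of that polling pattern (the helper and the 2-second interval are made up; the context, namespace, and claim name are taken from the log) could be:

	// wait_pvc.go - hypothetical polling helper mirroring the repeated kubectl calls above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCPhase shells out to kubectl until the PVC reports wantPhase or the timeout expires.
	func waitForPVCPhase(kubeContext, namespace, name, wantPhase string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pvc", name,
				"-o", "jsonpath={.status.phase}", "-n", namespace).Output()
			if err == nil && strings.TrimSpace(string(out)) == wantPhase {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s did not reach phase %q within %s", namespace, name, wantPhase, timeout)
	}

	func main() {
		if err := waitForPVCPhase("addons-527088", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}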

                                                
                                    
TestAddons/parallel/Headlamp (11.05s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-527088 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-527088 --alsologtostderr -v=1: (1.040346299s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7fc69f7444-kpc7m" [7c39eeac-f8a6-48f1-ad80-4e3a887680be] Pending
helpers_test.go:344: "headlamp-7fc69f7444-kpc7m" [7c39eeac-f8a6-48f1-ad80-4e3a887680be] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7fc69f7444-kpc7m" [7c39eeac-f8a6-48f1-ad80-4e3a887680be] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003487516s
--- PASS: TestAddons/parallel/Headlamp (11.05s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.6s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-h4qxf" [e6005728-bada-450c-ba88-598ed4e267cd] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004184691s
addons_test.go:862: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-527088
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

                                                
                                    
TestAddons/parallel/LocalPath (51.69s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-527088 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-527088 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527088 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4b92845d-cff5-49c8-9df8-af6e42fa281f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4b92845d-cff5-49c8-9df8-af6e42fa281f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4b92845d-cff5-49c8-9df8-af6e42fa281f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.005584397s
addons_test.go:992: (dbg) Run:  kubectl --context addons-527088 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-arm64 -p addons-527088 ssh "cat /opt/local-path-provisioner/pvc-6d99b03d-25b3-47de-a9a4-a709a8b14304_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-527088 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-527088 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-arm64 -p addons-527088 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-arm64 -p addons-527088 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.449516731s)
--- PASS: TestAddons/parallel/LocalPath (51.69s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.74s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-kmqzl" [251afb28-9d62-40dc-807a-5b8184c7ca8e] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00541221s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-527088
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.74s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-4rbvh" [b78ce710-8506-4054-8e18-36edca487c5f] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004428277s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/parallel/Volcano (160.57s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:889: volcano-scheduler stabilized in 4.315176ms
addons_test.go:905: volcano-controller stabilized in 5.178083ms
addons_test.go:897: volcano-admission stabilized in 6.615947ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-765f888978-nghn4" [54fe7412-8f2e-4a28-a8c7-13a92529ee79] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.009415302s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-7b497cf95b-6w8bz" [37706d93-dd80-4ee7-9315-18d4069e5a59] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.006205081s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controller-86c5446455-9pbvq" [b4eda0ad-b639-44a6-8423-626a9e18f5ab] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 6.004276317s
addons_test.go:924: (dbg) Run:  kubectl --context addons-527088 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-527088 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-527088 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [1cb2fe83-5322-487b-a75a-109b9b3751ad] Pending
helpers_test.go:344: "test-job-nginx-0" [1cb2fe83-5322-487b-a75a-109b9b3751ad] Pending: PodScheduled:Unschedulable (all nodes are unavailable: 1 node(s) resource fit failed.)
helpers_test.go:344: "test-job-nginx-0" [1cb2fe83-5322-487b-a75a-109b9b3751ad] Pending: PodScheduled:Schedulable (Pod my-volcano/test-job-nginx-0 can possibly be assigned to addons-527088 once resource is released)
helpers_test.go:344: "test-job-nginx-0" [1cb2fe83-5322-487b-a75a-109b9b3751ad] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [1cb2fe83-5322-487b-a75a-109b9b3751ad] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 2m14.005196879s
addons_test.go:960: (dbg) Run:  out/minikube-linux-arm64 -p addons-527088 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-linux-arm64 -p addons-527088 addons disable volcano --alsologtostderr -v=1: (10.021823598s)
--- PASS: TestAddons/parallel/Volcano (160.57s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-527088 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-527088 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-527088
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-527088: (12.131733851s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-527088
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-527088
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-527088
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestCertOptions (38.75s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-344744 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-344744 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (35.902180701s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-344744 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-344744 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-344744 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-344744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-344744
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-344744: (2.183912922s)
--- PASS: TestCertOptions (38.75s)
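For reference, the certificate options exercised above can be reproduced by hand; a minimal sketch with an illustrative profile name, using only flags that appear in the run:

    $ minikube start -p cert-options-demo --memory=2048 \
        --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
        --apiserver-names=localhost --apiserver-names=www.google.com \
        --apiserver-port=8555 --driver=docker --container-runtime=containerd
    # inspect the generated API server certificate for the extra SANs and port
    $ minikube -p cert-options-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
    $ minikube delete -p cert-options-demo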

TestCertExpiration (231.45s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-611852 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-611852 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (42.812286306s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-611852 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-611852 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.262414021s)
helpers_test.go:175: Cleaning up "cert-expiration-611852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-611852
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-611852: (2.370377018s)
--- PASS: TestCertExpiration (231.45s)
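The certificate-expiration test above is a two-phase start: first with a deliberately short validity, then again with a long one so the certificates get regenerated. Roughly, with an illustrative profile name:

    $ minikube start -p cert-expiration-demo --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=containerd
    # ...once the short-lived certificates near expiry, restart with a longer validity...
    $ minikube start -p cert-expiration-demo --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=containerd
    $ minikube delete -p cert-expiration-demo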

TestForceSystemdFlag (41.04s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-939130 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-939130 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.556355274s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-939130 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-939130" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-939130
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-939130: (2.093658552s)
--- PASS: TestForceSystemdFlag (41.04s)
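The check above amounts to starting with --force-systemd and reading back the rendered containerd configuration (presumably to confirm the systemd cgroup setting). A sketch with an illustrative profile name:

    $ minikube start -p force-systemd-demo --memory=2048 --force-systemd --driver=docker --container-runtime=containerd
    $ minikube -p force-systemd-demo ssh "cat /etc/containerd/config.toml"
    $ minikube delete -p force-systemd-demo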

TestForceSystemdEnv (39.4s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-380218 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-380218 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.867764865s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-380218 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-380218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-380218
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-380218: (2.196841332s)
--- PASS: TestForceSystemdEnv (39.40s)

TestDockerEnvContainerd (136.57s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-368303 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-368303 --driver=docker  --container-runtime=containerd: (29.922989332s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-368303"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-368303": (1.081166739s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-dUOk5i0OeunD/agent.298831" SSH_AGENT_PID="298832" DOCKER_HOST=ssh://docker@127.0.0.1:33148 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-dUOk5i0OeunD/agent.298831" SSH_AGENT_PID="298832" DOCKER_HOST=ssh://docker@127.0.0.1:33148 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
E0620 18:04:14.356219  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
E0620 18:04:14.363675  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
E0620 18:04:14.373988  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
E0620 18:04:14.394273  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
E0620 18:04:14.434618  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
E0620 18:04:14.514925  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
E0620 18:04:14.675332  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
E0620 18:04:14.995883  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
E0620 18:04:15.636863  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
E0620 18:04:16.917133  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
E0620 18:04:19.477276  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
E0620 18:04:24.598054  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
E0620 18:04:34.838419  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
E0620 18:04:55.319140  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-dUOk5i0OeunD/agent.298831" SSH_AGENT_PID="298832" DOCKER_HOST=ssh://docker@127.0.0.1:33148 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1m31.733345069s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-dUOk5i0OeunD/agent.298831" SSH_AGENT_PID="298832" DOCKER_HOST=ssh://docker@127.0.0.1:33148 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-368303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-368303
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-368303: (2.326878912s)
--- PASS: TestDockerEnvContainerd (136.57s)
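The docker-env flow above can be reproduced interactively; a minimal sketch with an illustrative profile name, using the same flags as the run (--ssh-host --ssh-add makes docker-env emit SSH_AUTH_SOCK, SSH_AGENT_PID and an ssh:// DOCKER_HOST, as seen in the commands above):

    $ minikube start -p dockerenv-demo --driver=docker --container-runtime=containerd
    $ eval "$(minikube docker-env --ssh-host --ssh-add -p dockerenv-demo)"
    $ docker version
    $ DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    $ docker image ls
    $ minikube delete -p dockerenv-demo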

TestErrorSpam/setup (31.05s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-780822 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-780822 --driver=docker  --container-runtime=containerd
E0620 18:05:36.280143  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-780822 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-780822 --driver=docker  --container-runtime=containerd: (31.048367058s)
--- PASS: TestErrorSpam/setup (31.05s)

TestErrorSpam/start (0.75s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-780822 --log_dir /tmp/nospam-780822 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-780822 --log_dir /tmp/nospam-780822 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-780822 --log_dir /tmp/nospam-780822 start --dry-run
--- PASS: TestErrorSpam/start (0.75s)

TestErrorSpam/status (0.96s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-780822 --log_dir /tmp/nospam-780822 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-780822 --log_dir /tmp/nospam-780822 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-780822 --log_dir /tmp/nospam-780822 status
--- PASS: TestErrorSpam/status (0.96s)

TestErrorSpam/pause (1.66s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-780822 --log_dir /tmp/nospam-780822 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-780822 --log_dir /tmp/nospam-780822 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-780822 --log_dir /tmp/nospam-780822 pause
--- PASS: TestErrorSpam/pause (1.66s)

TestErrorSpam/unpause (1.73s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-780822 --log_dir /tmp/nospam-780822 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-780822 --log_dir /tmp/nospam-780822 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-780822 --log_dir /tmp/nospam-780822 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

TestErrorSpam/stop (1.42s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-780822 --log_dir /tmp/nospam-780822 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-780822 --log_dir /tmp/nospam-780822 stop: (1.230201235s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-780822 --log_dir /tmp/nospam-780822 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-780822 --log_dir /tmp/nospam-780822 stop
--- PASS: TestErrorSpam/stop (1.42s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19106-274269/.minikube/files/etc/test/nested/copy/279671/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (63.68s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-979723 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0620 18:06:58.201768  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-979723 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m3.67539264s)
--- PASS: TestFunctional/serial/StartWithProxy (63.68s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.07s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-979723 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-979723 --alsologtostderr -v=8: (6.059836409s)
functional_test.go:659: soft start took 6.073007s for "functional-979723" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.07s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-979723 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (14.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-979723 cache add registry.k8s.io/pause:3.1: (1.646085697s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-979723 cache add registry.k8s.io/pause:3.3: (11.454004937s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-979723 cache add registry.k8s.io/pause:latest: (1.347254409s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (14.45s)

TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-979723 /tmp/TestFunctionalserialCacheCmdcacheadd_local3069048496/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 cache add minikube-local-cache-test:functional-979723
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 cache delete minikube-local-cache-test:functional-979723
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-979723
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-979723 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (288.76467ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-979723 cache reload: (1.219853804s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)
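The cache_reload sequence above removes an image from the node, confirms it is gone, then restores it from minikube's local image cache. Against an existing profile (placeholder name), roughly:

    $ minikube -p <profile> ssh sudo crictl rmi registry.k8s.io/pause:latest
    $ minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
    $ minikube -p <profile> cache reload
    $ minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again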

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 kubectl -- --context functional-979723 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-979723 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (29.61s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-979723 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-979723 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (29.604934503s)
functional_test.go:757: restart took 29.605057398s for "functional-979723" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (29.61s)
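The restart above passes a component flag through --extra-config; the same mechanism can be used directly against any profile, e.g.:

    $ minikube start -p <profile> --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all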

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-979723 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.44s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-979723 logs: (1.444512356s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

TestFunctional/serial/LogsFileCmd (1.44s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 logs --file /tmp/TestFunctionalserialLogsFileCmd1495257773/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-979723 logs --file /tmp/TestFunctionalserialLogsFileCmd1495257773/001/logs.txt: (1.440640289s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.44s)

TestFunctional/serial/InvalidService (4.45s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-979723 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-979723
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-979723: exit status 115 (588.698694ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31122 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-979723 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.45s)

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-979723 config get cpus: exit status 14 (72.22824ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-979723 config get cpus: exit status 14 (77.263764ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
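The config behaviour validated above (a get on an unset key fails with exit status 14 and "specified key could not be found in config") can be reproduced as:

    $ minikube -p <profile> config get cpus     # exit status 14: key not set
    $ minikube -p <profile> config set cpus 2
    $ minikube -p <profile> config get cpus     # now returns the stored value
    $ minikube -p <profile> config unset cpus
    $ minikube -p <profile> config get cpus     # exit status 14 again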

TestFunctional/parallel/DashboardCmd (10.03s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-979723 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-979723 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 312671: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.03s)

TestFunctional/parallel/DryRun (0.43s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-979723 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-979723 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (197.995608ms)

-- stdout --
	* [functional-979723] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19106
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19106-274269/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19106-274269/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0620 18:08:36.969899  312392 out.go:291] Setting OutFile to fd 1 ...
	I0620 18:08:36.970035  312392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:08:36.970046  312392 out.go:304] Setting ErrFile to fd 2...
	I0620 18:08:36.970052  312392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:08:36.970330  312392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19106-274269/.minikube/bin
	I0620 18:08:36.970734  312392 out.go:298] Setting JSON to false
	I0620 18:08:36.971810  312392 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6667,"bootTime":1718900250,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0620 18:08:36.971890  312392 start.go:139] virtualization:  
	I0620 18:08:36.974552  312392 out.go:177] * [functional-979723] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0620 18:08:36.977229  312392 out.go:177]   - MINIKUBE_LOCATION=19106
	I0620 18:08:36.977328  312392 notify.go:220] Checking for updates...
	I0620 18:08:36.981339  312392 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0620 18:08:36.983389  312392 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19106-274269/kubeconfig
	I0620 18:08:36.985465  312392 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19106-274269/.minikube
	I0620 18:08:36.987410  312392 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0620 18:08:36.989798  312392 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0620 18:08:36.992508  312392 config.go:182] Loaded profile config "functional-979723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0620 18:08:36.993235  312392 driver.go:392] Setting default libvirt URI to qemu:///system
	I0620 18:08:37.030723  312392 docker.go:122] docker version: linux-26.1.4:Docker Engine - Community
	I0620 18:08:37.030854  312392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0620 18:08:37.102276  312392 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-06-20 18:08:37.092425795 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0620 18:08:37.102404  312392 docker.go:295] overlay module found
	I0620 18:08:37.104890  312392 out.go:177] * Using the docker driver based on existing profile
	I0620 18:08:37.106868  312392 start.go:297] selected driver: docker
	I0620 18:08:37.106891  312392 start.go:901] validating driver "docker" against &{Name:functional-979723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-979723 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0620 18:08:37.107053  312392 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0620 18:08:37.109622  312392 out.go:177] 
	W0620 18:08:37.111787  312392 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0620 18:08:37.114290  312392 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-979723 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.43s)

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-979723 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-979723 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (218.696469ms)

-- stdout --
	* [functional-979723] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19106
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19106-274269/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19106-274269/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0620 18:08:36.763889  312289 out.go:291] Setting OutFile to fd 1 ...
	I0620 18:08:36.764082  312289 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:08:36.764104  312289 out.go:304] Setting ErrFile to fd 2...
	I0620 18:08:36.764123  312289 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:08:36.764483  312289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19106-274269/.minikube/bin
	I0620 18:08:36.764871  312289 out.go:298] Setting JSON to false
	I0620 18:08:36.765917  312289 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6667,"bootTime":1718900250,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0620 18:08:36.766015  312289 start.go:139] virtualization:  
	I0620 18:08:36.777441  312289 out.go:177] * [functional-979723] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0620 18:08:36.786264  312289 out.go:177]   - MINIKUBE_LOCATION=19106
	I0620 18:08:36.786340  312289 notify.go:220] Checking for updates...
	I0620 18:08:36.790739  312289 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0620 18:08:36.792769  312289 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19106-274269/kubeconfig
	I0620 18:08:36.794631  312289 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19106-274269/.minikube
	I0620 18:08:36.796721  312289 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0620 18:08:36.799405  312289 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0620 18:08:36.802135  312289 config.go:182] Loaded profile config "functional-979723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0620 18:08:36.802809  312289 driver.go:392] Setting default libvirt URI to qemu:///system
	I0620 18:08:36.827338  312289 docker.go:122] docker version: linux-26.1.4:Docker Engine - Community
	I0620 18:08:36.827454  312289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0620 18:08:36.901181  312289 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-06-20 18:08:36.891776119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0620 18:08:36.901295  312289 docker.go:295] overlay module found
	I0620 18:08:36.907176  312289 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0620 18:08:36.909191  312289 start.go:297] selected driver: docker
	I0620 18:08:36.909213  312289 start.go:901] validating driver "docker" against &{Name:functional-979723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1718753665-19106@sha256:735aacbd61d487240dc39ba6e4d70dd6ae1ad6181ca2ba092d372605e48ee636 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-979723 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0620 18:08:36.909330  312289 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0620 18:08:36.911742  312289 out.go:177] 
	W0620 18:08:36.914079  312289 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0620 18:08:36.916155  312289 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

TestFunctional/parallel/StatusCmd (1.23s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.23s)

TestFunctional/parallel/ServiceCmdConnect (10.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-979723 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-979723 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-sq7rx" [af42aa9c-40f2-481b-a91e-4590e04d0a2d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-sq7rx" [af42aa9c-40f2-481b-a91e-4590e04d0a2d] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.006895626s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:30355
functional_test.go:1671: http://192.168.49.2:30355: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6f49f58cd5-sq7rx

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30355
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.70s)
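The flow above is a plain NodePort round trip. A minimal manual equivalent, reusing the image, port, and names from the run (the final curl is an addition for illustration; the test fetches the URL programmatically):

$ kubectl --context functional-979723 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
$ kubectl --context functional-979723 expose deployment hello-node-connect --type=NodePort --port=8080
$ out/minikube-linux-arm64 -p functional-979723 service hello-node-connect --url
$ curl -s "$(out/minikube-linux-arm64 -p functional-979723 service hello-node-connect --url)"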

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5cef9f33-9196-44e4-9587-d52695cb0c89] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004529736s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-979723 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-979723 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-979723 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-979723 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [02d15581-d7cc-43f8-a572-09f6fdafb423] Pending
helpers_test.go:344: "sp-pod" [02d15581-d7cc-43f8-a572-09f6fdafb423] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [02d15581-d7cc-43f8-a572-09f6fdafb423] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004116577s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-979723 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-979723 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-979723 delete -f testdata/storage-provisioner/pod.yaml: (1.302436317s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-979723 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ff36053c-f007-4bce-b690-43683a8abc29] Pending
helpers_test.go:344: "sp-pod" [ff36053c-f007-4bce-b690-43683a8abc29] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004022553s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-979723 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.34s)
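The sequence above creates a claim against the default StorageClass, writes a file from one pod, deletes the pod, and confirms the file survives in a replacement pod. A self-contained sketch of the same flow; the inline manifests are illustrative stand-ins for the repo's testdata/storage-provisioner files and only reuse the names visible in the log:

$ cat > pvc-demo.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: docker.io/library/nginx
    volumeMounts:
    - name: mypd
      mountPath: /tmp/mount
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF
$ kubectl --context functional-979723 apply -f pvc-demo.yaml
$ kubectl --context functional-979723 wait --for=condition=Ready pod/sp-pod
$ kubectl --context functional-979723 exec sp-pod -- touch /tmp/mount/foo
$ kubectl --context functional-979723 delete pod sp-pod
$ kubectl --context functional-979723 apply -f pvc-demo.yaml   # PVC is unchanged; only the Pod is re-created
$ kubectl --context functional-979723 wait --for=condition=Ready pod/sp-pod
$ kubectl --context functional-979723 exec sp-pod -- ls /tmp/mount   # foo should still be present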

                                                
                                    
TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh -n functional-979723 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 cp functional-979723:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd6410014/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh -n functional-979723 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh -n functional-979723 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.10s)
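For reference, the copy checks above can be repeated by hand; the commands are taken from the run, with the host-side destination path simplified. Note that the last copy targets a path whose parent directories do not yet exist inside the node, which this run shows minikube cp creating:

$ out/minikube-linux-arm64 -p functional-979723 cp testdata/cp-test.txt /home/docker/cp-test.txt
$ out/minikube-linux-arm64 -p functional-979723 ssh -n functional-979723 "sudo cat /home/docker/cp-test.txt"
$ out/minikube-linux-arm64 -p functional-979723 cp functional-979723:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
$ out/minikube-linux-arm64 -p functional-979723 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt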

                                                
                                    
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/279671/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "sudo cat /etc/test/nested/copy/279671/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)
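FileSync verifies that files staged on the host are pushed into the node. As a hedged sketch of the mechanism being exercised (the $MINIKUBE_HOME/files layout is an assumption drawn from minikube's file-sync feature, not something shown in this log): a file placed under ~/.minikube/files/<path> before the cluster is (re)started should appear at /<path> inside the node, e.g.

$ mkdir -p ~/.minikube/files/etc/test/demo
$ echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/demo/hosts
$ out/minikube-linux-arm64 start -p functional-979723
$ out/minikube-linux-arm64 -p functional-979723 ssh "sudo cat /etc/test/demo/hosts"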

                                                
                                    
TestFunctional/parallel/CertSync (2.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/279671.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "sudo cat /etc/ssl/certs/279671.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/279671.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "sudo cat /usr/share/ca-certificates/279671.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/2796712.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "sudo cat /etc/ssl/certs/2796712.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/2796712.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "sudo cat /usr/share/ca-certificates/2796712.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.05s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-979723 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-979723 ssh "sudo systemctl is-active docker": exit status 1 (267.713909ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-979723 ssh "sudo systemctl is-active crio": exit status 1 (261.374407ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

                                                
                                    
TestFunctional/parallel/License (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-979723 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-979723 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-979723 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 310234: os: process already finished
helpers_test.go:508: unable to kill pid 310085: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-979723 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.70s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-979723 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-979723 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [89337d2d-2ba6-4909-9089-cde787609666] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [89337d2d-2ba6-4909-9089-cde787609666] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004369667s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-979723 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.163.196 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
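The tunnel subtests above show the working pattern: keep minikube tunnel running, expose a LoadBalancer service, and read its assigned ingress IP. A minimal manual equivalent (the nginx deployment is a generic stand-in for testdata/testsvc.yaml, whose contents are not shown in this report; the jsonpath query is the one used above):

$ out/minikube-linux-arm64 -p functional-979723 tunnel &
$ kubectl --context functional-979723 create deployment nginx-svc --image=nginx
$ kubectl --context functional-979723 expose deployment nginx-svc --type=LoadBalancer --port=80
$ kubectl --context functional-979723 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
$ curl -s "http://$(kubectl --context functional-979723 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"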

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-979723 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-979723 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-979723 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-wt5n6" [6e63ea9a-334a-4180-b525-157ce8d18ea4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-wt5n6" [6e63ea9a-334a-4180-b525-157ce8d18ea4] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.00452133s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "345.296856ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "62.08657ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 service list -o json
functional_test.go:1490: Took "595.145597ms" to run "out/minikube-linux-arm64 -p functional-979723 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "404.044412ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "82.483934ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31126
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-979723 /tmp/TestFunctionalparallelMountCmdany-port4096094578/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1718906914144262044" to /tmp/TestFunctionalparallelMountCmdany-port4096094578/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1718906914144262044" to /tmp/TestFunctionalparallelMountCmdany-port4096094578/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1718906914144262044" to /tmp/TestFunctionalparallelMountCmdany-port4096094578/001/test-1718906914144262044
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-979723 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (390.297204ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 20 18:08 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 20 18:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 20 18:08 test-1718906914144262044
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh cat /mount-9p/test-1718906914144262044
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-979723 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [fe6c4b9d-85f1-4931-98c0-3a51a7b619c9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [fe6c4b9d-85f1-4931-98c0-3a51a7b619c9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [fe6c4b9d-85f1-4931-98c0-3a51a7b619c9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003870256s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-979723 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-979723 /tmp/TestFunctionalparallelMountCmdany-port4096094578/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.66s)
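The MountCmd subtests drive minikube mount, which exposes a host directory inside the node over 9p. A manual sketch of the same verification (the host path here is illustrative; the checks and the --kill flag appear in the runs, and the specific-port subtest below additionally pins the 9p server port with --port 46464):

$ out/minikube-linux-arm64 mount -p functional-979723 /tmp/host-dir:/mount-9p &
$ out/minikube-linux-arm64 -p functional-979723 ssh "findmnt -T /mount-9p | grep 9p"
$ out/minikube-linux-arm64 -p functional-979723 ssh -- ls -la /mount-9p
$ out/minikube-linux-arm64 mount -p functional-979723 --kill=true   # tear down all mounts for the profile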

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31126
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.48s)
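Taken together, the ServiceCmd subtests cover the main ways to resolve a service endpoint from the CLI; for reference, the commands from the runs above are:

$ out/minikube-linux-arm64 -p functional-979723 service list
$ out/minikube-linux-arm64 -p functional-979723 service list -o json
$ out/minikube-linux-arm64 -p functional-979723 service --namespace=default --https --url hello-node
$ out/minikube-linux-arm64 -p functional-979723 service hello-node --url --format={{.IP}}
$ out/minikube-linux-arm64 -p functional-979723 service hello-node --url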

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-979723 /tmp/TestFunctionalparallelMountCmdspecific-port705132677/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-979723 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (498.94764ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-979723 /tmp/TestFunctionalparallelMountCmdspecific-port705132677/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-979723 ssh "sudo umount -f /mount-9p": exit status 1 (351.462873ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-979723 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-979723 /tmp/TestFunctionalparallelMountCmdspecific-port705132677/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-979723 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3140594946/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-979723 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3140594946/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-979723 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3140594946/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-979723 ssh "findmnt -T" /mount1: exit status 1 (880.68383ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-979723 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-979723 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3140594946/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-979723 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3140594946/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-979723 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3140594946/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.50s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-979723 version -o=json --components: (1.337177062s)
--- PASS: TestFunctional/parallel/Version/components (1.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-979723 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-979723
docker.io/kindest/kindnetd:v20240513-cd2ac642
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-979723 image ls --format short --alsologtostderr:
I0620 18:09:03.808073  314984 out.go:291] Setting OutFile to fd 1 ...
I0620 18:09:03.808259  314984 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0620 18:09:03.808291  314984 out.go:304] Setting ErrFile to fd 2...
I0620 18:09:03.808314  314984 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0620 18:09:03.808589  314984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19106-274269/.minikube/bin
I0620 18:09:03.809272  314984 config.go:182] Loaded profile config "functional-979723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0620 18:09:03.809447  314984 config.go:182] Loaded profile config "functional-979723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0620 18:09:03.810247  314984 cli_runner.go:164] Run: docker container inspect functional-979723 --format={{.State.Status}}
I0620 18:09:03.849661  314984 ssh_runner.go:195] Run: systemctl --version
I0620 18:09:03.849739  314984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-979723
I0620 18:09:03.876933  314984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/functional-979723/id_rsa Username:docker}
I0620 18:09:03.968902  314984 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
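The remaining ImageCommands list subtests below repeat the same inventory in the other output formats; only the shape of the output changes. The commands, copied from the runs (minus --alsologtostderr), are:

$ out/minikube-linux-arm64 -p functional-979723 image ls --format short
$ out/minikube-linux-arm64 -p functional-979723 image ls --format table
$ out/minikube-linux-arm64 -p functional-979723 image ls --format json
$ out/minikube-linux-arm64 -p functional-979723 image ls --format yaml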

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-979723 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/nginx                     | alpine             | sha256:4f4922 | 20.2MB |
| registry.k8s.io/kube-proxy                  | v1.30.2            | sha256:66dbb9 | 25.6MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/library/nginx                     | latest             | sha256:11ceee | 67.7MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:014faa | 66.2MB |
| registry.k8s.io/kube-controller-manager     | v1.30.2            | sha256:e1dcc3 | 28.4MB |
| docker.io/kindest/kindnetd                  | v20240513-cd2ac642 | sha256:89d73d | 25.8MB |
| docker.io/library/minikube-local-cache-test | functional-979723  | sha256:859a91 | 991B   |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/kube-apiserver              | v1.30.2            | sha256:84c601 | 29.9MB |
| registry.k8s.io/kube-scheduler              | v1.30.2            | sha256:c7dd04 | 17.6MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-979723 image ls --format table --alsologtostderr:
I0620 18:09:04.439245  315122 out.go:291] Setting OutFile to fd 1 ...
I0620 18:09:04.439447  315122 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0620 18:09:04.439476  315122 out.go:304] Setting ErrFile to fd 2...
I0620 18:09:04.439498  315122 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0620 18:09:04.439824  315122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19106-274269/.minikube/bin
I0620 18:09:04.440520  315122 config.go:182] Loaded profile config "functional-979723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0620 18:09:04.440736  315122 config.go:182] Loaded profile config "functional-979723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0620 18:09:04.441260  315122 cli_runner.go:164] Run: docker container inspect functional-979723 --format={{.State.Status}}
I0620 18:09:04.466851  315122 ssh_runner.go:195] Run: systemctl --version
I0620 18:09:04.466911  315122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-979723
I0620 18:09:04.496568  315122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/functional-979723/id_rsa Username:docker}
I0620 18:09:04.587275  315122 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-979723 image ls --format json --alsologtostderr:
[{"id":"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"66189079"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"29937230"},{"id":"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e"],"repoTags":["registry.k8s.io/kube-contro
ller-manager:v1.30.2"],"size":"28368865"},{"id":"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae","repoDigests":["registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"25633111"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"17643200"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:89d73d416b992e8f9602b67b4614d9e7f0655aebb3696e18efec695e0b654c40","repoDigests":["docker.io/kindest/kindnetd@sha256:
9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"],"repoTags":["docker.io/kindest/kindnetd:v20240513-cd2ac642"],"size":"25795292"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:4f49228258b642594e55baf98d153d0e85f3fb989c1eb8450c520ed77bf27e65","repoDigests":["docker.io/library/nginx@sha256:69f8c2c72671490607f52122be2af27d4fc09657ff57e42045801aa93d2090f7"],"repoTags":["docker.io/library/nginx:alpine"],"size":"20199152"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k
8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:859a91eeec91652596348529046058e2aee35ae52d1eaf432bd19881f86ac372","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-979723"],"size":"991"},{"id":"sha256:11ceee7cdc57225711b8382e1965974bbb259de14a9f5f7d6b9f161ced50a10a","repoDigests":["docker.io/library/nginx@sha256:56b388b0d79c738f4cf51bbaf184a14fab19337f4819ceb2cae7d94100262de8"],"repoTags":["docker.io/library/nginx:latest"],"size":"67668479"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/cored
ns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-979723 image ls --format json --alsologtostderr:
I0620 18:09:04.137665  315046 out.go:291] Setting OutFile to fd 1 ...
I0620 18:09:04.137847  315046 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0620 18:09:04.137876  315046 out.go:304] Setting ErrFile to fd 2...
I0620 18:09:04.137920  315046 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0620 18:09:04.138771  315046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19106-274269/.minikube/bin
I0620 18:09:04.139525  315046 config.go:182] Loaded profile config "functional-979723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0620 18:09:04.139703  315046 config.go:182] Loaded profile config "functional-979723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0620 18:09:04.140257  315046 cli_runner.go:164] Run: docker container inspect functional-979723 --format={{.State.Status}}
I0620 18:09:04.160371  315046 ssh_runner.go:195] Run: systemctl --version
I0620 18:09:04.160550  315046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-979723
I0620 18:09:04.205976  315046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/functional-979723/id_rsa Username:docker}
I0620 18:09:04.314539  315046 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-979723 image ls --format yaml --alsologtostderr:
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "17643200"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:89d73d416b992e8f9602b67b4614d9e7f0655aebb3696e18efec695e0b654c40
repoDigests:
- docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8
repoTags:
- docker.io/kindest/kindnetd:v20240513-cd2ac642
size: "25795292"
- id: sha256:11ceee7cdc57225711b8382e1965974bbb259de14a9f5f7d6b9f161ced50a10a
repoDigests:
- docker.io/library/nginx@sha256:56b388b0d79c738f4cf51bbaf184a14fab19337f4819ceb2cae7d94100262de8
repoTags:
- docker.io/library/nginx:latest
size: "67668479"
- id: sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "28368865"
- id: sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "25633111"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:4f49228258b642594e55baf98d153d0e85f3fb989c1eb8450c520ed77bf27e65
repoDigests:
- docker.io/library/nginx@sha256:69f8c2c72671490607f52122be2af27d4fc09657ff57e42045801aa93d2090f7
repoTags:
- docker.io/library/nginx:alpine
size: "20199152"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "66189079"
- id: sha256:859a91eeec91652596348529046058e2aee35ae52d1eaf432bd19881f86ac372
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-979723
size: "991"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "29937230"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-979723 image ls --format yaml --alsologtostderr:
I0620 18:09:03.830247  314985 out.go:291] Setting OutFile to fd 1 ...
I0620 18:09:03.831086  314985 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0620 18:09:03.831097  314985 out.go:304] Setting ErrFile to fd 2...
I0620 18:09:03.831103  314985 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0620 18:09:03.831373  314985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19106-274269/.minikube/bin
I0620 18:09:03.832103  314985 config.go:182] Loaded profile config "functional-979723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0620 18:09:03.832220  314985 config.go:182] Loaded profile config "functional-979723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0620 18:09:03.832677  314985 cli_runner.go:164] Run: docker container inspect functional-979723 --format={{.State.Status}}
I0620 18:09:03.853012  314985 ssh_runner.go:195] Run: systemctl --version
I0620 18:09:03.853086  314985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-979723
I0620 18:09:03.876670  314985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/functional-979723/id_rsa Username:docker}
I0620 18:09:03.968311  314985 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
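For context, the image list above is the stdout of out/minikube-linux-arm64 -p functional-979723 image ls --format yaml; the trace shows that minikube opens an SSH session to the node and reads the containerd image store through crictl before formatting it. A minimal manual equivalent, assuming the functional-979723 profile is still running:

	# Raw listing straight from the node's CRI runtime (what minikube reformats as YAML)
	out/minikube-linux-arm64 -p functional-979723 ssh "sudo crictl images --output json"
	# Or let minikube do the formatting, as the test does
	out/minikube-linux-arm64 -p functional-979723 image ls --format yaml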

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-979723 ssh pgrep buildkitd: exit status 1 (350.331495ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 image build -t localhost/my-image:functional-979723 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-979723 image build -t localhost/my-image:functional-979723 testdata/build --alsologtostderr: (2.186850359s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-979723 image build -t localhost/my-image:functional-979723 testdata/build --alsologtostderr:
I0620 18:09:04.454655  315123 out.go:291] Setting OutFile to fd 1 ...
I0620 18:09:04.455356  315123 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0620 18:09:04.455391  315123 out.go:304] Setting ErrFile to fd 2...
I0620 18:09:04.455398  315123 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0620 18:09:04.455790  315123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19106-274269/.minikube/bin
I0620 18:09:04.456651  315123 config.go:182] Loaded profile config "functional-979723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0620 18:09:04.457594  315123 config.go:182] Loaded profile config "functional-979723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0620 18:09:04.458479  315123 cli_runner.go:164] Run: docker container inspect functional-979723 --format={{.State.Status}}
I0620 18:09:04.481521  315123 ssh_runner.go:195] Run: systemctl --version
I0620 18:09:04.481609  315123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-979723
I0620 18:09:04.505434  315123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/functional-979723/id_rsa Username:docker}
I0620 18:09:04.605159  315123 build_images.go:161] Building image from path: /tmp/build.2740744886.tar
I0620 18:09:04.605237  315123 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0620 18:09:04.621496  315123 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2740744886.tar
I0620 18:09:04.626338  315123 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2740744886.tar: stat -c "%s %y" /var/lib/minikube/build/build.2740744886.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2740744886.tar': No such file or directory
I0620 18:09:04.626366  315123 ssh_runner.go:362] scp /tmp/build.2740744886.tar --> /var/lib/minikube/build/build.2740744886.tar (3072 bytes)
I0620 18:09:04.660118  315123 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2740744886
I0620 18:09:04.668877  315123 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2740744886 -xf /var/lib/minikube/build/build.2740744886.tar
I0620 18:09:04.678437  315123 containerd.go:394] Building image: /var/lib/minikube/build/build.2740744886
I0620 18:09:04.678516  315123 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2740744886 --local dockerfile=/var/lib/minikube/build/build.2740744886 --output type=image,name=localhost/my-image:functional-979723
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:4c565e2bb2fe92754ea71b59b3ee1fac60b7727212356c99df3270e27c7ea308
#8 exporting manifest sha256:4c565e2bb2fe92754ea71b59b3ee1fac60b7727212356c99df3270e27c7ea308 0.0s done
#8 exporting config sha256:9171396841f6380fe613ccc32de982723cad197c64bf68cc6165f2a3e4071af3 0.0s done
#8 naming to localhost/my-image:functional-979723 done
#8 DONE 0.2s
I0620 18:09:06.532807  315123 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2740744886 --local dockerfile=/var/lib/minikube/build/build.2740744886 --output type=image,name=localhost/my-image:functional-979723: (1.854254307s)
I0620 18:09:06.532890  315123 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2740744886
I0620 18:09:06.543648  315123 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2740744886.tar
I0620 18:09:06.554250  315123 build_images.go:217] Built localhost/my-image:functional-979723 from /tmp/build.2740744886.tar
I0620 18:09:06.554293  315123 build_images.go:133] succeeded building to: functional-979723
I0620 18:09:06.554299  315123 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.76s)
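The BuildKit trace above implies a three-instruction Dockerfile of roughly 97 bytes in testdata/build: a FROM on gcr.io/k8s-minikube/busybox, a RUN true, and an ADD of content.txt. The checked-in files are not part of this log, so the sketch below is a reconstruction from steps #1-#7; the scratch directory and the placeholder contents of content.txt are assumptions:

	# Hypothetical re-creation of the test's build context, inferred from the trace
	mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
	printf 'hello' > content.txt        # placeholder; the real content.txt is not shown in the log
	cat > Dockerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /
	EOF
	# Same build path the test exercises, pointed at the sketch directory
	out/minikube-linux-arm64 -p functional-979723 image build -t localhost/my-image:functional-979723 .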

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2024/06/20 18:08:47 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.816463842s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-979723
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.84s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 image rm gcr.io/google-containers/addon-resizer:functional-979723 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-979723
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-979723 image save --daemon gcr.io/google-containers/addon-resizer:functional-979723 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-979723
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-979723
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-979723
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-979723
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (132.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-264076 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0620 18:09:14.354098  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
E0620 18:09:42.041985  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-264076 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m11.449019683s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (132.26s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (28.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-264076 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-264076 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-264076 -- rollout status deployment/busybox: (25.407956829s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-264076 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-264076 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-264076 -- exec busybox-fc5497c4f-2jl94 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-264076 -- exec busybox-fc5497c4f-725cr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-264076 -- exec busybox-fc5497c4f-h24d2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-264076 -- exec busybox-fc5497c4f-2jl94 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-264076 -- exec busybox-fc5497c4f-725cr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-264076 -- exec busybox-fc5497c4f-h24d2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-264076 -- exec busybox-fc5497c4f-2jl94 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-264076 -- exec busybox-fc5497c4f-725cr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-264076 -- exec busybox-fc5497c4f-h24d2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (28.38s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-264076 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-264076 -- exec busybox-fc5497c4f-2jl94 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-264076 -- exec busybox-fc5497c4f-2jl94 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-264076 -- exec busybox-fc5497c4f-725cr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-264076 -- exec busybox-fc5497c4f-725cr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-264076 -- exec busybox-fc5497c4f-h24d2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-264076 -- exec busybox-fc5497c4f-h24d2 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.61s)
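Each check above is two execs per pod: the first pipes busybox's nslookup output through awk 'NR==5' and cut -d' ' -f3, which assumes the resolved address of host.minikube.internal sits on line 5, third field; the second pings the Docker network gateway (192.168.49.1) once. The same two commands can be run by hand against any of the pods listed above, assuming the ha-264076 profile is still up:

	# Resolve host.minikube.internal from inside a pod, exactly as the test does
	out/minikube-linux-arm64 kubectl -p ha-264076 -- exec busybox-fc5497c4f-2jl94 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	# Confirm the pod can reach the host-side gateway
	out/minikube-linux-arm64 kubectl -p ha-264076 -- exec busybox-fc5497c4f-2jl94 -- sh -c "ping -c 1 192.168.49.1"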

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (20.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-264076 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-264076 -v=7 --alsologtostderr: (19.914897094s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-264076 status -v=7 --alsologtostderr: (1.023562145s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.94s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-264076 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.76s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 cp testdata/cp-test.txt ha-264076:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 cp ha-264076:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1422685930/001/cp-test_ha-264076.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 cp ha-264076:/home/docker/cp-test.txt ha-264076-m02:/home/docker/cp-test_ha-264076_ha-264076-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m02 "sudo cat /home/docker/cp-test_ha-264076_ha-264076-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 cp ha-264076:/home/docker/cp-test.txt ha-264076-m03:/home/docker/cp-test_ha-264076_ha-264076-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m03 "sudo cat /home/docker/cp-test_ha-264076_ha-264076-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 cp ha-264076:/home/docker/cp-test.txt ha-264076-m04:/home/docker/cp-test_ha-264076_ha-264076-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m04 "sudo cat /home/docker/cp-test_ha-264076_ha-264076-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 cp testdata/cp-test.txt ha-264076-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 cp ha-264076-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1422685930/001/cp-test_ha-264076-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 cp ha-264076-m02:/home/docker/cp-test.txt ha-264076:/home/docker/cp-test_ha-264076-m02_ha-264076.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076 "sudo cat /home/docker/cp-test_ha-264076-m02_ha-264076.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 cp ha-264076-m02:/home/docker/cp-test.txt ha-264076-m03:/home/docker/cp-test_ha-264076-m02_ha-264076-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m03 "sudo cat /home/docker/cp-test_ha-264076-m02_ha-264076-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 cp ha-264076-m02:/home/docker/cp-test.txt ha-264076-m04:/home/docker/cp-test_ha-264076-m02_ha-264076-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m04 "sudo cat /home/docker/cp-test_ha-264076-m02_ha-264076-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 cp testdata/cp-test.txt ha-264076-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 cp ha-264076-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1422685930/001/cp-test_ha-264076-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 cp ha-264076-m03:/home/docker/cp-test.txt ha-264076:/home/docker/cp-test_ha-264076-m03_ha-264076.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076 "sudo cat /home/docker/cp-test_ha-264076-m03_ha-264076.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 cp ha-264076-m03:/home/docker/cp-test.txt ha-264076-m02:/home/docker/cp-test_ha-264076-m03_ha-264076-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m02 "sudo cat /home/docker/cp-test_ha-264076-m03_ha-264076-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 cp ha-264076-m03:/home/docker/cp-test.txt ha-264076-m04:/home/docker/cp-test_ha-264076-m03_ha-264076-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m04 "sudo cat /home/docker/cp-test_ha-264076-m03_ha-264076-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 cp testdata/cp-test.txt ha-264076-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 cp ha-264076-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1422685930/001/cp-test_ha-264076-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 cp ha-264076-m04:/home/docker/cp-test.txt ha-264076:/home/docker/cp-test_ha-264076-m04_ha-264076.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076 "sudo cat /home/docker/cp-test_ha-264076-m04_ha-264076.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 cp ha-264076-m04:/home/docker/cp-test.txt ha-264076-m02:/home/docker/cp-test_ha-264076-m04_ha-264076-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m02 "sudo cat /home/docker/cp-test_ha-264076-m04_ha-264076-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 cp ha-264076-m04:/home/docker/cp-test.txt ha-264076-m03:/home/docker/cp-test_ha-264076-m04_ha-264076-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m03 "sudo cat /home/docker/cp-test_ha-264076-m04_ha-264076-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.05s)
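The block above is the full cross-product: testdata/cp-test.txt is copied to every node, each node's copy is pulled back to the host, and every node-to-node pair is copied and then verified with ssh and sudo cat. Condensed to a single pair, with commands taken verbatim from the run above, the pattern is:

	# Host to node, node to node, then verify on the destination
	out/minikube-linux-arm64 -p ha-264076 cp testdata/cp-test.txt ha-264076:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p ha-264076 cp ha-264076:/home/docker/cp-test.txt ha-264076-m02:/home/docker/cp-test_ha-264076_ha-264076-m02.txt
	out/minikube-linux-arm64 -p ha-264076 ssh -n ha-264076-m02 "sudo cat /home/docker/cp-test_ha-264076_ha-264076-m02.txt"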

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-264076 node stop m02 -v=7 --alsologtostderr: (12.050677132s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-264076 status -v=7 --alsologtostderr: exit status 7 (850.069508ms)

                                                
                                                
-- stdout --
	ha-264076
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-264076-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-264076-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-264076-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0620 18:12:44.871551  330527 out.go:291] Setting OutFile to fd 1 ...
	I0620 18:12:44.872801  330527 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:12:44.872817  330527 out.go:304] Setting ErrFile to fd 2...
	I0620 18:12:44.872823  330527 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:12:44.873118  330527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19106-274269/.minikube/bin
	I0620 18:12:44.873363  330527 out.go:298] Setting JSON to false
	I0620 18:12:44.873398  330527 mustload.go:65] Loading cluster: ha-264076
	I0620 18:12:44.873526  330527 notify.go:220] Checking for updates...
	I0620 18:12:44.873843  330527 config.go:182] Loaded profile config "ha-264076": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0620 18:12:44.873858  330527 status.go:255] checking status of ha-264076 ...
	I0620 18:12:44.874734  330527 cli_runner.go:164] Run: docker container inspect ha-264076 --format={{.State.Status}}
	I0620 18:12:44.898530  330527 status.go:330] ha-264076 host status = "Running" (err=<nil>)
	I0620 18:12:44.898552  330527 host.go:66] Checking if "ha-264076" exists ...
	I0620 18:12:44.898993  330527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-264076
	I0620 18:12:44.926281  330527 host.go:66] Checking if "ha-264076" exists ...
	I0620 18:12:44.926683  330527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0620 18:12:44.926808  330527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-264076
	I0620 18:12:44.946066  330527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/ha-264076/id_rsa Username:docker}
	I0620 18:12:45.073093  330527 ssh_runner.go:195] Run: systemctl --version
	I0620 18:12:45.079988  330527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0620 18:12:45.101256  330527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0620 18:12:45.187978  330527 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:72 SystemTime:2024-06-20 18:12:45.17629315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0620 18:12:45.188679  330527 kubeconfig.go:125] found "ha-264076" server: "https://192.168.49.254:8443"
	I0620 18:12:45.188710  330527 api_server.go:166] Checking apiserver status ...
	I0620 18:12:45.188770  330527 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0620 18:12:45.205858  330527 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1435/cgroup
	I0620 18:12:45.224264  330527 api_server.go:182] apiserver freezer: "11:freezer:/docker/8e98c4297e0de214c937a30309ca8b0cc473588b7b96ef443b53e360e0ae34e9/kubepods/burstable/pod654693f1a57442a28da6f2bba40fbb2f/9b85e497a7e5f19e563ed7311b187d5288a520357751053d80bf47b39945b529"
	I0620 18:12:45.224391  330527 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8e98c4297e0de214c937a30309ca8b0cc473588b7b96ef443b53e360e0ae34e9/kubepods/burstable/pod654693f1a57442a28da6f2bba40fbb2f/9b85e497a7e5f19e563ed7311b187d5288a520357751053d80bf47b39945b529/freezer.state
	I0620 18:12:45.253508  330527 api_server.go:204] freezer state: "THAWED"
	I0620 18:12:45.253545  330527 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0620 18:12:45.262910  330527 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0620 18:12:45.262947  330527 status.go:422] ha-264076 apiserver status = Running (err=<nil>)
	I0620 18:12:45.262986  330527 status.go:257] ha-264076 status: &{Name:ha-264076 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0620 18:12:45.263067  330527 status.go:255] checking status of ha-264076-m02 ...
	I0620 18:12:45.263471  330527 cli_runner.go:164] Run: docker container inspect ha-264076-m02 --format={{.State.Status}}
	I0620 18:12:45.285192  330527 status.go:330] ha-264076-m02 host status = "Stopped" (err=<nil>)
	I0620 18:12:45.285217  330527 status.go:343] host is not running, skipping remaining checks
	I0620 18:12:45.285224  330527 status.go:257] ha-264076-m02 status: &{Name:ha-264076-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0620 18:12:45.285246  330527 status.go:255] checking status of ha-264076-m03 ...
	I0620 18:12:45.285628  330527 cli_runner.go:164] Run: docker container inspect ha-264076-m03 --format={{.State.Status}}
	I0620 18:12:45.306563  330527 status.go:330] ha-264076-m03 host status = "Running" (err=<nil>)
	I0620 18:12:45.306592  330527 host.go:66] Checking if "ha-264076-m03" exists ...
	I0620 18:12:45.306941  330527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-264076-m03
	I0620 18:12:45.326896  330527 host.go:66] Checking if "ha-264076-m03" exists ...
	I0620 18:12:45.327348  330527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0620 18:12:45.327563  330527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-264076-m03
	I0620 18:12:45.346330  330527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/ha-264076-m03/id_rsa Username:docker}
	I0620 18:12:45.440421  330527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0620 18:12:45.454189  330527 kubeconfig.go:125] found "ha-264076" server: "https://192.168.49.254:8443"
	I0620 18:12:45.454216  330527 api_server.go:166] Checking apiserver status ...
	I0620 18:12:45.454258  330527 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0620 18:12:45.469448  330527 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1447/cgroup
	I0620 18:12:45.479054  330527 api_server.go:182] apiserver freezer: "11:freezer:/docker/d88df2f4ec4d98c737433c5eeccf4d2666ffa96f37732b5b85bf0c0015c236c7/kubepods/burstable/pod49f6efb93f2374bf3b3f87553d5fc210/6af8f5db3ca71fee3ff2ecaf347cab03453eb7460ad59e871634c729e32d13be"
	I0620 18:12:45.479143  330527 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d88df2f4ec4d98c737433c5eeccf4d2666ffa96f37732b5b85bf0c0015c236c7/kubepods/burstable/pod49f6efb93f2374bf3b3f87553d5fc210/6af8f5db3ca71fee3ff2ecaf347cab03453eb7460ad59e871634c729e32d13be/freezer.state
	I0620 18:12:45.487663  330527 api_server.go:204] freezer state: "THAWED"
	I0620 18:12:45.487705  330527 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0620 18:12:45.495776  330527 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0620 18:12:45.495806  330527 status.go:422] ha-264076-m03 apiserver status = Running (err=<nil>)
	I0620 18:12:45.495829  330527 status.go:257] ha-264076-m03 status: &{Name:ha-264076-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0620 18:12:45.495850  330527 status.go:255] checking status of ha-264076-m04 ...
	I0620 18:12:45.496159  330527 cli_runner.go:164] Run: docker container inspect ha-264076-m04 --format={{.State.Status}}
	I0620 18:12:45.513597  330527 status.go:330] ha-264076-m04 host status = "Running" (err=<nil>)
	I0620 18:12:45.513626  330527 host.go:66] Checking if "ha-264076-m04" exists ...
	I0620 18:12:45.513941  330527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-264076-m04
	I0620 18:12:45.531174  330527 host.go:66] Checking if "ha-264076-m04" exists ...
	I0620 18:12:45.531546  330527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0620 18:12:45.531615  330527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-264076-m04
	I0620 18:12:45.548497  330527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/ha-264076-m04/id_rsa Username:docker}
	I0620 18:12:45.644153  330527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0620 18:12:45.656173  330527 status.go:257] ha-264076-m04 status: &{Name:ha-264076-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.90s)
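The non-zero exit above is the expected outcome: with m02 stopped, status reports the profile as degraded and exits 7 instead of 0, which is what the test asserts on. A rough manual check (the reading of exit code 7 is taken from this run, not from documented semantics):

	# Stop one control-plane node, then look at the aggregate status and its exit code
	out/minikube-linux-arm64 -p ha-264076 node stop m02 -v=7 --alsologtostderr
	out/minikube-linux-arm64 -p ha-264076 status -v=7 --alsologtostderr
	echo $?        # 7 in the run above, since ha-264076-m02 reports Stopped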

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (27.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 node start m02 -v=7 --alsologtostderr
E0620 18:13:06.145755  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
E0620 18:13:06.151086  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
E0620 18:13:06.161368  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
E0620 18:13:06.181683  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
E0620 18:13:06.221906  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
E0620 18:13:06.302207  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
E0620 18:13:06.462672  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
E0620 18:13:06.783336  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
E0620 18:13:07.423556  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
E0620 18:13:08.704204  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
E0620 18:13:11.264725  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-264076 node start m02 -v=7 --alsologtostderr: (26.56096787s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-264076 status -v=7 --alsologtostderr: (1.058495291s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (27.73s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.80s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (128.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-264076 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-264076 -v=7 --alsologtostderr
E0620 18:13:16.387112  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
E0620 18:13:26.627336  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
E0620 18:13:47.108240  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-264076 -v=7 --alsologtostderr: (37.165696434s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-264076 --wait=true -v=7 --alsologtostderr
E0620 18:14:14.353202  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
E0620 18:14:28.069456  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-264076 --wait=true -v=7 --alsologtostderr: (1m31.114445584s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-264076
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (128.48s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-264076 node delete m03 -v=7 --alsologtostderr: (10.718460293s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.65s)
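The final assertion renders each node's Ready condition through a go-template, so after deleting m03 a healthy cluster should print one True per remaining node. The same check can be run by hand, with the template copied verbatim from the test:

	kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
	# Expected: one "True" line for each node still in the cluster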

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 stop -v=7 --alsologtostderr
E0620 18:15:49.989678  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-264076 stop -v=7 --alsologtostderr: (35.878373633s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-264076 status -v=7 --alsologtostderr: exit status 7 (109.37858ms)

                                                
                                                
-- stdout --
	ha-264076
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-264076-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-264076-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0620 18:16:11.378199  344236 out.go:291] Setting OutFile to fd 1 ...
	I0620 18:16:11.378337  344236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:16:11.378347  344236 out.go:304] Setting ErrFile to fd 2...
	I0620 18:16:11.378352  344236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:16:11.378602  344236 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19106-274269/.minikube/bin
	I0620 18:16:11.378778  344236 out.go:298] Setting JSON to false
	I0620 18:16:11.378810  344236 mustload.go:65] Loading cluster: ha-264076
	I0620 18:16:11.378908  344236 notify.go:220] Checking for updates...
	I0620 18:16:11.379258  344236 config.go:182] Loaded profile config "ha-264076": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0620 18:16:11.379277  344236 status.go:255] checking status of ha-264076 ...
	I0620 18:16:11.379775  344236 cli_runner.go:164] Run: docker container inspect ha-264076 --format={{.State.Status}}
	I0620 18:16:11.397344  344236 status.go:330] ha-264076 host status = "Stopped" (err=<nil>)
	I0620 18:16:11.397368  344236 status.go:343] host is not running, skipping remaining checks
	I0620 18:16:11.397375  344236 status.go:257] ha-264076 status: &{Name:ha-264076 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0620 18:16:11.397405  344236 status.go:255] checking status of ha-264076-m02 ...
	I0620 18:16:11.397739  344236 cli_runner.go:164] Run: docker container inspect ha-264076-m02 --format={{.State.Status}}
	I0620 18:16:11.416620  344236 status.go:330] ha-264076-m02 host status = "Stopped" (err=<nil>)
	I0620 18:16:11.416644  344236 status.go:343] host is not running, skipping remaining checks
	I0620 18:16:11.416661  344236 status.go:257] ha-264076-m02 status: &{Name:ha-264076-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0620 18:16:11.416684  344236 status.go:255] checking status of ha-264076-m04 ...
	I0620 18:16:11.416985  344236 cli_runner.go:164] Run: docker container inspect ha-264076-m04 --format={{.State.Status}}
	I0620 18:16:11.437476  344236 status.go:330] ha-264076-m04 host status = "Stopped" (err=<nil>)
	I0620 18:16:11.437497  344236 status.go:343] host is not running, skipping remaining checks
	I0620 18:16:11.437505  344236 status.go:257] ha-264076-m04 status: &{Name:ha-264076-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.99s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (70.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-264076 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-264076 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m9.120027684s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (70.13s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (45.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-264076 --control-plane -v=7 --alsologtostderr
E0620 18:18:06.143891  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-264076 --control-plane -v=7 --alsologtostderr: (44.115186651s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-264076 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (45.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.73s)

                                                
                                    
TestJSONOutput/start/Command (58.52s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-991033 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0620 18:18:33.829971  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-991033 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (58.514030127s)
--- PASS: TestJSONOutput/start/Command (58.52s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-991033 --output=json --user=testUser
E0620 18:19:14.353819  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-991033 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.75s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-991033 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-991033 --output=json --user=testUser: (5.747025102s)
--- PASS: TestJSONOutput/stop/Command (5.75s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-563853 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-563853 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (88.490665ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6f9a315a-b2d3-4b70-940c-d636fd1fd5df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-563853] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"21fd2223-aacc-4872-af7a-0e9ec84bfdec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19106"}}
	{"specversion":"1.0","id":"fc2f3513-f885-43ba-9bd9-78c93d625c8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c15ade62-f58a-47e9-9910-e90505394af9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19106-274269/kubeconfig"}}
	{"specversion":"1.0","id":"034d7ced-c0e8-43b4-9d9b-294075170c9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19106-274269/.minikube"}}
	{"specversion":"1.0","id":"0fba2ed7-119f-420b-8a27-50c4cc0057c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"bd150cc7-428c-459c-b5d8-aa182cebc5bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c2229a5d-6a5b-4670-96ed-eb89e8a08ab0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-563853" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-563853
--- PASS: TestErrorJSONOutput (0.23s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (38.47s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-103667 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-103667 --network=: (36.345185327s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-103667" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-103667
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-103667: (2.10450722s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.47s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (35.44s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-268661 --network=bridge
E0620 18:20:37.402553  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-268661 --network=bridge: (33.523065987s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-268661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-268661
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-268661: (1.896589981s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.44s)

                                                
                                    
TestKicExistingNetwork (33.55s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-271857 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-271857 --network=existing-network: (31.421209693s)
helpers_test.go:175: Cleaning up "existing-network-271857" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-271857
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-271857: (1.981366745s)
--- PASS: TestKicExistingNetwork (33.55s)

                                                
                                    
TestKicCustomSubnet (34.43s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-035958 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-035958 --subnet=192.168.60.0/24: (32.328288323s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-035958 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-035958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-035958
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-035958: (2.080512709s)
--- PASS: TestKicCustomSubnet (34.43s)

                                                
                                    
TestKicStaticIP (37.25s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-269271 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-269271 --static-ip=192.168.200.200: (35.006472884s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-269271 ip
helpers_test.go:175: Cleaning up "static-ip-269271" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-269271
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-269271: (2.106390172s)
--- PASS: TestKicStaticIP (37.25s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (63.57s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-969737 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-969737 --driver=docker  --container-runtime=containerd: (29.565357115s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-972576 --driver=docker  --container-runtime=containerd
E0620 18:23:06.144229  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-972576 --driver=docker  --container-runtime=containerd: (28.9181957s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-969737
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-972576
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-972576" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-972576
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-972576: (1.922622383s)
helpers_test.go:175: Cleaning up "first-969737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-969737
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-969737: (1.945283542s)
--- PASS: TestMinikubeProfile (63.57s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.87s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-848416 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-848416 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.865903924s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.87s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-848416 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.12s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-861807 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-861807 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.114761521s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.12s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-861807 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-848416 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-848416 --alsologtostderr -v=5: (1.613436227s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-861807 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-861807
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-861807: (1.198740267s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.57s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-861807
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-861807: (6.573016893s)
--- PASS: TestMountStart/serial/RestartStopped (7.57s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-861807 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (76.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-892855 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0620 18:24:14.353818  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-892855 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m16.474539731s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (76.98s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-892855 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-892855 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-892855 -- rollout status deployment/busybox: (3.375893494s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-892855 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-892855 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-892855 -- exec busybox-fc5497c4f-nqj9f -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-892855 -- exec busybox-fc5497c4f-p4vzx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-892855 -- exec busybox-fc5497c4f-nqj9f -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-892855 -- exec busybox-fc5497c4f-p4vzx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-892855 -- exec busybox-fc5497c4f-nqj9f -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-892855 -- exec busybox-fc5497c4f-p4vzx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.19s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-892855 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-892855 -- exec busybox-fc5497c4f-nqj9f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-892855 -- exec busybox-fc5497c4f-nqj9f -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-892855 -- exec busybox-fc5497c4f-p4vzx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-892855 -- exec busybox-fc5497c4f-p4vzx -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)

                                                
                                    
TestMultiNode/serial/AddNode (16.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-892855 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-892855 -v 3 --alsologtostderr: (16.155069784s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.80s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-892855 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 cp testdata/cp-test.txt multinode-892855:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 ssh -n multinode-892855 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 cp multinode-892855:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1521681916/001/cp-test_multinode-892855.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 ssh -n multinode-892855 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 cp multinode-892855:/home/docker/cp-test.txt multinode-892855-m02:/home/docker/cp-test_multinode-892855_multinode-892855-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 ssh -n multinode-892855 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 ssh -n multinode-892855-m02 "sudo cat /home/docker/cp-test_multinode-892855_multinode-892855-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 cp multinode-892855:/home/docker/cp-test.txt multinode-892855-m03:/home/docker/cp-test_multinode-892855_multinode-892855-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 ssh -n multinode-892855 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 ssh -n multinode-892855-m03 "sudo cat /home/docker/cp-test_multinode-892855_multinode-892855-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 cp testdata/cp-test.txt multinode-892855-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 ssh -n multinode-892855-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 cp multinode-892855-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1521681916/001/cp-test_multinode-892855-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 ssh -n multinode-892855-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 cp multinode-892855-m02:/home/docker/cp-test.txt multinode-892855:/home/docker/cp-test_multinode-892855-m02_multinode-892855.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 ssh -n multinode-892855-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 ssh -n multinode-892855 "sudo cat /home/docker/cp-test_multinode-892855-m02_multinode-892855.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 cp multinode-892855-m02:/home/docker/cp-test.txt multinode-892855-m03:/home/docker/cp-test_multinode-892855-m02_multinode-892855-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 ssh -n multinode-892855-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 ssh -n multinode-892855-m03 "sudo cat /home/docker/cp-test_multinode-892855-m02_multinode-892855-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 cp testdata/cp-test.txt multinode-892855-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 ssh -n multinode-892855-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 cp multinode-892855-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1521681916/001/cp-test_multinode-892855-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 ssh -n multinode-892855-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 cp multinode-892855-m03:/home/docker/cp-test.txt multinode-892855:/home/docker/cp-test_multinode-892855-m03_multinode-892855.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 ssh -n multinode-892855-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 ssh -n multinode-892855 "sudo cat /home/docker/cp-test_multinode-892855-m03_multinode-892855.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 cp multinode-892855-m03:/home/docker/cp-test.txt multinode-892855-m02:/home/docker/cp-test_multinode-892855-m03_multinode-892855-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 ssh -n multinode-892855-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 ssh -n multinode-892855-m02 "sudo cat /home/docker/cp-test_multinode-892855-m03_multinode-892855-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.89s)

                                                
                                    
TestMultiNode/serial/StopNode (2.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-892855 node stop m03: (1.210096084s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-892855 status: exit status 7 (505.418315ms)

                                                
                                                
-- stdout --
	multinode-892855
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-892855-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-892855-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-892855 status --alsologtostderr: exit status 7 (513.744743ms)

                                                
                                                
-- stdout --
	multinode-892855
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-892855-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-892855-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0620 18:25:46.531546  394543 out.go:291] Setting OutFile to fd 1 ...
	I0620 18:25:46.531737  394543 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:25:46.531841  394543 out.go:304] Setting ErrFile to fd 2...
	I0620 18:25:46.531871  394543 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:25:46.532150  394543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19106-274269/.minikube/bin
	I0620 18:25:46.532368  394543 out.go:298] Setting JSON to false
	I0620 18:25:46.532431  394543 mustload.go:65] Loading cluster: multinode-892855
	I0620 18:25:46.532530  394543 notify.go:220] Checking for updates...
	I0620 18:25:46.532935  394543 config.go:182] Loaded profile config "multinode-892855": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0620 18:25:46.532982  394543 status.go:255] checking status of multinode-892855 ...
	I0620 18:25:46.533750  394543 cli_runner.go:164] Run: docker container inspect multinode-892855 --format={{.State.Status}}
	I0620 18:25:46.553091  394543 status.go:330] multinode-892855 host status = "Running" (err=<nil>)
	I0620 18:25:46.553127  394543 host.go:66] Checking if "multinode-892855" exists ...
	I0620 18:25:46.553421  394543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-892855
	I0620 18:25:46.593749  394543 host.go:66] Checking if "multinode-892855" exists ...
	I0620 18:25:46.594076  394543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0620 18:25:46.594128  394543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-892855
	I0620 18:25:46.615990  394543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33285 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/multinode-892855/id_rsa Username:docker}
	I0620 18:25:46.709198  394543 ssh_runner.go:195] Run: systemctl --version
	I0620 18:25:46.713619  394543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0620 18:25:46.725021  394543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0620 18:25:46.780135  394543 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-06-20 18:25:46.770944869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0620 18:25:46.780770  394543 kubeconfig.go:125] found "multinode-892855" server: "https://192.168.67.2:8443"
	I0620 18:25:46.780795  394543 api_server.go:166] Checking apiserver status ...
	I0620 18:25:46.780836  394543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0620 18:25:46.791624  394543 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1401/cgroup
	I0620 18:25:46.801180  394543 api_server.go:182] apiserver freezer: "11:freezer:/docker/e1fe33a517fbf7b809644ce2aa19de73741d33570202a856400a4817f00d3555/kubepods/burstable/pod7d84fcb10336a86d5880bd2fa0bb3763/b574af7c55391c1e8d12917a04c07247d374c5c30820cde3cbcb030f1b1bd41e"
	I0620 18:25:46.801255  394543 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e1fe33a517fbf7b809644ce2aa19de73741d33570202a856400a4817f00d3555/kubepods/burstable/pod7d84fcb10336a86d5880bd2fa0bb3763/b574af7c55391c1e8d12917a04c07247d374c5c30820cde3cbcb030f1b1bd41e/freezer.state
	I0620 18:25:46.809788  394543 api_server.go:204] freezer state: "THAWED"
	I0620 18:25:46.809824  394543 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0620 18:25:46.817322  394543 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0620 18:25:46.817349  394543 status.go:422] multinode-892855 apiserver status = Running (err=<nil>)
	I0620 18:25:46.817362  394543 status.go:257] multinode-892855 status: &{Name:multinode-892855 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0620 18:25:46.817385  394543 status.go:255] checking status of multinode-892855-m02 ...
	I0620 18:25:46.817698  394543 cli_runner.go:164] Run: docker container inspect multinode-892855-m02 --format={{.State.Status}}
	I0620 18:25:46.833077  394543 status.go:330] multinode-892855-m02 host status = "Running" (err=<nil>)
	I0620 18:25:46.833104  394543 host.go:66] Checking if "multinode-892855-m02" exists ...
	I0620 18:25:46.833412  394543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-892855-m02
	I0620 18:25:46.849524  394543 host.go:66] Checking if "multinode-892855-m02" exists ...
	I0620 18:25:46.849849  394543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0620 18:25:46.849895  394543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-892855-m02
	I0620 18:25:46.865570  394543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33290 SSHKeyPath:/home/jenkins/minikube-integration/19106-274269/.minikube/machines/multinode-892855-m02/id_rsa Username:docker}
	I0620 18:25:46.960278  394543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0620 18:25:46.972377  394543 status.go:257] multinode-892855-m02 status: &{Name:multinode-892855-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0620 18:25:46.972418  394543 status.go:255] checking status of multinode-892855-m03 ...
	I0620 18:25:46.972822  394543 cli_runner.go:164] Run: docker container inspect multinode-892855-m03 --format={{.State.Status}}
	I0620 18:25:46.989914  394543 status.go:330] multinode-892855-m03 host status = "Stopped" (err=<nil>)
	I0620 18:25:46.989939  394543 status.go:343] host is not running, skipping remaining checks
	I0620 18:25:46.989948  394543 status.go:257] multinode-892855-m03 status: &{Name:multinode-892855-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-892855 node start m03 -v=7 --alsologtostderr: (8.787463389s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.55s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (82.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-892855
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-892855
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-892855: (24.968603275s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-892855 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-892855 --wait=true -v=8 --alsologtostderr: (57.633604602s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-892855
--- PASS: TestMultiNode/serial/RestartKeepsNodes (82.74s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-892855 node delete m03: (4.728483704s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.42s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-892855 stop: (23.843264833s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-892855 status: exit status 7 (93.06901ms)

                                                
                                                
-- stdout --
	multinode-892855
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-892855-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-892855 status --alsologtostderr: exit status 7 (91.641559ms)

                                                
                                                
-- stdout --
	multinode-892855
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-892855-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0620 18:27:48.685418  402180 out.go:291] Setting OutFile to fd 1 ...
	I0620 18:27:48.685609  402180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:27:48.685638  402180 out.go:304] Setting ErrFile to fd 2...
	I0620 18:27:48.685661  402180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:27:48.685946  402180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19106-274269/.minikube/bin
	I0620 18:27:48.686161  402180 out.go:298] Setting JSON to false
	I0620 18:27:48.686245  402180 mustload.go:65] Loading cluster: multinode-892855
	I0620 18:27:48.686329  402180 notify.go:220] Checking for updates...
	I0620 18:27:48.686802  402180 config.go:182] Loaded profile config "multinode-892855": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0620 18:27:48.687157  402180 status.go:255] checking status of multinode-892855 ...
	I0620 18:27:48.687731  402180 cli_runner.go:164] Run: docker container inspect multinode-892855 --format={{.State.Status}}
	I0620 18:27:48.704155  402180 status.go:330] multinode-892855 host status = "Stopped" (err=<nil>)
	I0620 18:27:48.704176  402180 status.go:343] host is not running, skipping remaining checks
	I0620 18:27:48.704184  402180 status.go:257] multinode-892855 status: &{Name:multinode-892855 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0620 18:27:48.704211  402180 status.go:255] checking status of multinode-892855-m02 ...
	I0620 18:27:48.704512  402180 cli_runner.go:164] Run: docker container inspect multinode-892855-m02 --format={{.State.Status}}
	I0620 18:27:48.724657  402180 status.go:330] multinode-892855-m02 host status = "Stopped" (err=<nil>)
	I0620 18:27:48.724682  402180 status.go:343] host is not running, skipping remaining checks
	I0620 18:27:48.724690  402180 status.go:257] multinode-892855-m02 status: &{Name:multinode-892855-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.03s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (50.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-892855 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0620 18:28:06.144263  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-892855 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.829696147s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-892855 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.50s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-892855
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-892855-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-892855-m02 --driver=docker  --container-runtime=containerd: exit status 14 (88.759673ms)

                                                
                                                
-- stdout --
	* [multinode-892855-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19106
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19106-274269/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19106-274269/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-892855-m02' is duplicated with machine name 'multinode-892855-m02' in profile 'multinode-892855'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-892855-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-892855-m03 --driver=docker  --container-runtime=containerd: (33.518779048s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-892855
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-892855: exit status 80 (324.453707ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-892855 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-892855-m03 already exists in multinode-892855-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-892855-m03
E0620 18:29:14.354057  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-892855-m03: (1.996873513s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.99s)
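The exit status 14 above comes from minikube's profile-name validation: a new profile may not reuse a machine name that already belongs to an existing profile. The following Go snippet is only an illustrative sketch of such a check (the map layout and function name are hypothetical, not minikube's actual code):

package main

import (
	"fmt"
	"os"
)

// validateProfileName is a hypothetical sketch of the uniqueness check behind
// the MK_USAGE failure above: a new profile name must not collide with a
// machine name that already belongs to another profile.
func validateProfileName(newName string, machineToProfile map[string]string) error {
	if profile, ok := machineToProfile[newName]; ok {
		return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
			newName, newName, profile)
	}
	return nil
}

func main() {
	// Assumed existing state: profile "multinode-892855" owns machine
	// "multinode-892855-m02" (as in the log above).
	machines := map[string]string{
		"multinode-892855":     "multinode-892855",
		"multinode-892855-m02": "multinode-892855",
	}
	if err := validateProfileName("multinode-892855-m02", machines); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
}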

                                                
                                    
TestPreload (120.54s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-459925 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0620 18:29:29.190184  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-459925 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m21.084852863s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-459925 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-459925 image pull gcr.io/k8s-minikube/busybox: (1.1977215s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-459925
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-459925: (12.112135366s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-459925 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-459925 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (23.479104014s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-459925 image list
helpers_test.go:175: Cleaning up "test-preload-459925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-459925
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-459925: (2.312231294s)
--- PASS: TestPreload (120.54s)

                                                
                                    
TestScheduledStopUnix (106.34s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-801358 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-801358 --memory=2048 --driver=docker  --container-runtime=containerd: (29.956433253s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-801358 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-801358 -n scheduled-stop-801358
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-801358 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-801358 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-801358 -n scheduled-stop-801358
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-801358
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-801358 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-801358
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-801358: exit status 7 (72.398809ms)

                                                
                                                
-- stdout --
	scheduled-stop-801358
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-801358 -n scheduled-stop-801358
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-801358 -n scheduled-stop-801358: exit status 7 (68.66286ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-801358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-801358
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-801358: (4.897019102s)
--- PASS: TestScheduledStopUnix (106.34s)
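TestScheduledStopUnix drives `minikube stop --schedule` and `--cancel-scheduled`, i.e. a delayed stop that can be aborted before it fires. A minimal sketch of that pattern with a cancellable timer (an assumed approach for illustration, not minikube's implementation):

package main

import (
	"fmt"
	"time"
)

// scheduleStop is a sketch of a cancellable delayed stop: stopFn fires after d
// unless the returned cancel function is called first (a stand-in for the
// --cancel-scheduled flag exercised above).
func scheduleStop(d time.Duration, stopFn func()) (cancel func() bool) {
	t := time.AfterFunc(d, stopFn)
	return t.Stop // Stop reports whether it cancelled the timer before it fired.
}

func main() {
	done := make(chan struct{})
	cancel := scheduleStop(50*time.Millisecond, func() {
		fmt.Println("stopping cluster")
		close(done)
	})
	_ = cancel // calling cancel() here would emulate `minikube stop --cancel-scheduled`
	<-done
}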

                                                
                                    
TestInsufficientStorage (10.15s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-443065 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
E0620 18:33:06.144013  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-443065 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.697231006s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8dda2ebd-98c4-4034-b23e-038293cc7e23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-443065] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"85c6ed33-a172-4dc7-8fe7-7606a4f2ab13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19106"}}
	{"specversion":"1.0","id":"4fbdf6c7-e4b6-4961-9255-c2f41ddf4d21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"05924542-7cff-4815-8543-f6eea4591118","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19106-274269/kubeconfig"}}
	{"specversion":"1.0","id":"187a8b2e-d8ac-4241-a56d-3cc1be4435d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19106-274269/.minikube"}}
	{"specversion":"1.0","id":"d128a08d-f0d4-4324-a81e-a12969d59604","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7a2c5159-19ad-4b2b-9735-ffe8988e7012","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cbf418d9-2d07-45ef-8682-c28a61f2f9fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"86096924-0d9f-445d-85bd-f8847fb8dcce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8f206a4f-9ea7-4975-9035-304f3bd1780d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bc266ad4-e38c-4a7e-b4bd-9cddde0a833d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"63a0e198-5614-45dc-bd1a-d96a53b5f69c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-443065\" primary control-plane node in \"insufficient-storage-443065\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5125c8c8-5812-47ea-966b-5855f2988643","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1718753665-19106 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e62b6d62-c2dd-4b18-acee-1f5159c545c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0df76697-44ea-43e4-b7e3-95b45c768ea3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-443065 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-443065 --output=json --layout=cluster: exit status 7 (292.434556ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-443065","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-443065","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0620 18:33:13.959696  419918 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-443065" does not appear in /home/jenkins/minikube-integration/19106-274269/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-443065 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-443065 --output=json --layout=cluster: exit status 7 (283.516302ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-443065","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-443065","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0620 18:33:14.243825  419970 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-443065" does not appear in /home/jenkins/minikube-integration/19106-274269/kubeconfig
	E0620 18:33:14.253843  419970 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/insufficient-storage-443065/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-443065" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-443065
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-443065: (1.874333824s)
--- PASS: TestInsufficientStorage (10.15s)
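TestInsufficientStorage forces the storage guard: with MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 the start aborts with RSRC_DOCKER_STORAGE (exit code 26) because /var looks full. A rough sketch of such a capacity check on Linux (the threshold and exit-code wiring below are assumptions):

package main

import (
	"fmt"
	"os"
	"syscall"
)

// percentUsed reports how full the filesystem containing path is (Linux only).
func percentUsed(path string) (float64, error) {
	var fs syscall.Statfs_t
	if err := syscall.Statfs(path, &fs); err != nil {
		return 0, err
	}
	total := fs.Blocks * uint64(fs.Bsize)
	avail := fs.Bavail * uint64(fs.Bsize)
	return 100 * float64(total-avail) / float64(total), nil
}

func main() {
	used, err := percentUsed("/var")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if used > 95 { // hypothetical threshold
		fmt.Fprintf(os.Stderr, "Docker is out of disk space! (/var is at %.0f%% of capacity)\n", used)
		os.Exit(26) // exit code 26 matches RSRC_DOCKER_STORAGE above
	}
	fmt.Printf("/var is %.0f%% used\n", used)
}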

                                                
                                    
TestRunningBinaryUpgrade (82.76s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1613113652 start -p running-upgrade-057725 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0620 18:38:06.143806  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1613113652 start -p running-upgrade-057725 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (44.812977795s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-057725 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-057725 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.659422185s)
helpers_test.go:175: Cleaning up "running-upgrade-057725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-057725
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-057725: (2.609484283s)
--- PASS: TestRunningBinaryUpgrade (82.76s)
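TestRunningBinaryUpgrade starts a cluster with a previously released minikube binary, then upgrades it in place with the binary under test and deletes the profile. A condensed sketch of that sequence via os/exec, reusing the binary paths and flags shown in the log (error handling trimmed, not the test's actual code):

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command, streaming its output, and aborts the sketch on failure.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", bin, args, err)
	}
}

func main() {
	const profile = "running-upgrade-057725"
	// 1. Start with the previously released binary (path as in the log).
	run("/tmp/minikube-v1.26.0.1613113652", "start", "-p", profile,
		"--memory=2200", "--vm-driver=docker", "--container-runtime=containerd")
	// 2. Upgrade the running cluster in place with the binary under test.
	run("out/minikube-linux-arm64", "start", "-p", profile,
		"--memory=2200", "--driver=docker", "--container-runtime=containerd")
	// 3. Clean up the profile.
	run("out/minikube-linux-arm64", "delete", "-p", profile)
}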

                                                
                                    
TestKubernetesUpgrade (351.16s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-321306 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-321306 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (57.667067276s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-321306
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-321306: (1.760256112s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-321306 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-321306 status --format={{.Host}}: exit status 7 (78.227066ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-321306 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-321306 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m42.210525372s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-321306 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-321306 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-321306 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (96.476406ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-321306] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19106
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19106-274269/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19106-274269/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-321306
	    minikube start -p kubernetes-upgrade-321306 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3213062 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.2, by running:
	    
	    minikube start -p kubernetes-upgrade-321306 --kubernetes-version=v1.30.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-321306 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-321306 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.642876356s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-321306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-321306
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-321306: (2.56237832s)
--- PASS: TestKubernetesUpgrade (351.16s)
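TestKubernetesUpgrade upgrades v1.20.0 to v1.30.2 and then confirms that a downgrade request is refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106). The core of such a guard is a version comparison between the deployed and the requested Kubernetes release; the sketch below is an assumed illustration, not minikube's code:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// parse splits a version like "v1.30.2" into numeric components.
func parse(v string) (p [3]int, err error) {
	for i, f := range strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3) {
		if p[i], err = strconv.Atoi(f); err != nil {
			return p, fmt.Errorf("bad version %q: %w", v, err)
		}
	}
	return p, nil
}

// less reports whether version a is older than version b.
func less(a, b [3]int) bool {
	for i := range a {
		if a[i] != b[i] {
			return a[i] < b[i]
		}
	}
	return false
}

// checkNoDowngrade is a sketch of the guard behind K8S_DOWNGRADE_UNSUPPORTED:
// it refuses to move an existing cluster onto an older Kubernetes version.
func checkNoDowngrade(current, requested string) error {
	cur, err := parse(current)
	if err != nil {
		return err
	}
	req, err := parse(requested)
	if err != nil {
		return err
	}
	if less(req, cur) {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
	}
	return nil
}

func main() {
	if err := checkNoDowngrade("v1.30.2", "v1.20.0"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
		os.Exit(106) // exit status 106, as seen in the log above
	}
}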

                                                
                                    
TestMissingContainerUpgrade (174.91s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2463200604 start -p missing-upgrade-854365 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2463200604 start -p missing-upgrade-854365 --memory=2200 --driver=docker  --container-runtime=containerd: (1m26.643299166s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-854365
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-854365: (10.351672479s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-854365
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-854365 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-854365 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m15.056410468s)
helpers_test.go:175: Cleaning up "missing-upgrade-854365" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-854365
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-854365: (2.004388159s)
--- PASS: TestMissingContainerUpgrade (174.91s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-048227 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-048227 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (81.79904ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-048227] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19106
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19106-274269/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19106-274269/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
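This subtest is pure argument validation: `--no-kubernetes` combined with `--kubernetes-version` must fail fast with MK_USAGE (exit status 14). A minimal sketch of such a mutual-exclusion check (the flag handling here is illustrative, not minikube's CLI code):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start the node without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	// Reject the contradictory combination seen in the test above.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags ok")
}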

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-048227 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-048227 --driver=docker  --container-runtime=containerd: (39.42309442s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-048227 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.78s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-048227 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-048227 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.234116796s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-048227 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-048227 status -o json: exit status 2 (299.057189ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-048227","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-048227
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-048227: (1.840387894s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.37s)

                                                
                                    
TestNoKubernetes/serial/Start (6.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-048227 --no-kubernetes --driver=docker  --container-runtime=containerd
E0620 18:34:14.353789  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-048227 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.778502506s)
--- PASS: TestNoKubernetes/serial/Start (6.78s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-048227 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-048227 "sudo systemctl is-active --quiet service kubelet": exit status 1 (320.307008ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
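The verification shells into the node and runs `sudo systemctl is-active --quiet service kubelet`; the non-zero exit (status 3, i.e. the unit is not active) is the expected result here. A small sketch of interpreting that exit status from Go (the helper below is illustrative, not the test's code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// kubeletActive runs `systemctl is-active --quiet kubelet` and interprets the
// exit status: 0 means active, 3 generally means inactive, anything else is an error.
func kubeletActive() (bool, error) {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err == nil {
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 3 {
		return false, nil
	}
	return false, err
}

func main() {
	active, err := kubeletActive()
	if err != nil {
		fmt.Println("could not query kubelet:", err)
		return
	}
	fmt.Println("kubelet active:", active)
}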

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.06s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-048227
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-048227: (1.27318379s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-048227 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-048227 --driver=docker  --container-runtime=containerd: (7.548159113s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.55s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-048227 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-048227 "sudo systemctl is-active --quiet service kubelet": exit status 1 (395.152102ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (88.75s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.392123702 start -p stopped-upgrade-534802 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.392123702 start -p stopped-upgrade-534802 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.032405318s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.392123702 -p stopped-upgrade-534802 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.392123702 -p stopped-upgrade-534802 stop: (1.287260911s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-534802 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0620 18:37:17.402834  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-534802 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.423309366s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (88.75s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-534802
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-534802: (1.336368421s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.34s)

                                                
                                    
TestPause/serial/Start (64.23s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-734712 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0620 18:39:14.353775  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-734712 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m4.228878555s)
--- PASS: TestPause/serial/Start (64.23s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.18s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-734712 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-734712 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.165244586s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.18s)

                                                
                                    
TestPause/serial/Pause (0.98s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-734712 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.98s)

                                                
                                    
TestPause/serial/VerifyStatus (0.49s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-734712 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-734712 --output=json --layout=cluster: exit status 2 (486.266576ms)

                                                
                                                
-- stdout --
	{"Name":"pause-734712","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-734712","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.49s)
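The `--output=json --layout=cluster` payload above encodes state with HTTP-style codes (418 Paused, 405 Stopped, 200 OK, 507 InsufficientStorage). A hedged sketch of decoding just the fields used in this report (the struct below is mine, not minikube's type):

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// ClusterStatus mirrors only the fields of the --layout=cluster payload that
// this report relies on; the field selection and struct name are illustrative.
type ClusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	payload := `{"Name":"pause-734712","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-734712","StatusCode":200,"StatusName":"OK"}]}`

	var st ClusterStatus
	if err := json.Unmarshal([]byte(payload), &st); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
	for _, n := range st.Nodes {
		fmt.Printf("  node %s: %d %s\n", n.Name, n.StatusCode, n.StatusName)
	}
}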

                                                
                                    
TestPause/serial/Unpause (0.89s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-734712 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.89s)

                                                
                                    
TestPause/serial/PauseAgain (1.25s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-734712 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-734712 --alsologtostderr -v=5: (1.25242616s)
--- PASS: TestPause/serial/PauseAgain (1.25s)

                                                
                                    
TestPause/serial/DeletePaused (2.89s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-734712 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-734712 --alsologtostderr -v=5: (2.888119173s)
--- PASS: TestPause/serial/DeletePaused (2.89s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.18s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-734712
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-734712: exit status 1 (16.538907ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-734712: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.18s)

                                                
                                    
TestNetworkPlugins/group/false (5.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-355113 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-355113 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (233.840646ms)

                                                
                                                
-- stdout --
	* [false-355113] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19106
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19106-274269/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19106-274269/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0620 18:40:30.418327  456943 out.go:291] Setting OutFile to fd 1 ...
	I0620 18:40:30.418440  456943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:40:30.418445  456943 out.go:304] Setting ErrFile to fd 2...
	I0620 18:40:30.418449  456943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0620 18:40:30.418694  456943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19106-274269/.minikube/bin
	I0620 18:40:30.419547  456943 out.go:298] Setting JSON to false
	I0620 18:40:30.421068  456943 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8581,"bootTime":1718900250,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0620 18:40:30.421190  456943 start.go:139] virtualization:  
	I0620 18:40:30.423686  456943 out.go:177] * [false-355113] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0620 18:40:30.425848  456943 out.go:177]   - MINIKUBE_LOCATION=19106
	I0620 18:40:30.425986  456943 notify.go:220] Checking for updates...
	I0620 18:40:30.430721  456943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0620 18:40:30.433463  456943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19106-274269/kubeconfig
	I0620 18:40:30.436482  456943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19106-274269/.minikube
	I0620 18:40:30.438557  456943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0620 18:40:30.445796  456943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0620 18:40:30.448417  456943 config.go:182] Loaded profile config "force-systemd-flag-939130": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0620 18:40:30.448529  456943 driver.go:392] Setting default libvirt URI to qemu:///system
	I0620 18:40:30.473240  456943 docker.go:122] docker version: linux-26.1.4:Docker Engine - Community
	I0620 18:40:30.473370  456943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0620 18:40:30.571533  456943 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-06-20 18:40:30.553653536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214892544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d2d58213f83a351ca8f528a95fbd145f5654e957 Expected:d2d58213f83a351ca8f528a95fbd145f5654e957} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.1]] Warnings:<nil>}}
	I0620 18:40:30.571650  456943 docker.go:295] overlay module found
	I0620 18:40:30.574232  456943 out.go:177] * Using the docker driver based on user configuration
	I0620 18:40:30.576249  456943 start.go:297] selected driver: docker
	I0620 18:40:30.576308  456943 start.go:901] validating driver "docker" against <nil>
	I0620 18:40:30.576326  456943 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0620 18:40:30.579836  456943 out.go:177] 
	W0620 18:40:30.581963  456943 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0620 18:40:30.584143  456943 out.go:177] 

                                                
                                                
** /stderr **
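The exit status 14 above is the expected outcome for this group: the containerd runtime requires a CNI, so `--cni=false` is rejected with MK_USAGE before any cluster is created. A rough sketch of that requirement check (names and wiring below are assumptions, not minikube's code):

package main

import (
	"fmt"
	"os"
)

// requireCNI is a sketch of the guard seen above: container runtimes other
// than Docker need a CNI, so --cni=false with containerd is rejected.
func requireCNI(runtime, cni string) error {
	if runtime != "docker" && cni == "false" {
		return fmt.Errorf("The %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := requireCNI("containerd", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
}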
net_test.go:88: 
----------------------- debugLogs start: false-355113 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-355113

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-355113

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-355113

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-355113

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-355113

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-355113

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-355113

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-355113

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-355113

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-355113

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-355113

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> k8s: describe netcat deployment:
error: context "false-355113" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-355113" does not exist

>>> k8s: netcat logs:
error: context "false-355113" does not exist

>>> k8s: describe coredns deployment:
error: context "false-355113" does not exist

>>> k8s: describe coredns pods:
error: context "false-355113" does not exist

>>> k8s: coredns logs:
error: context "false-355113" does not exist

>>> k8s: describe api server pod(s):
error: context "false-355113" does not exist

>>> k8s: api server logs:
error: context "false-355113" does not exist

>>> host: /etc/cni:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: ip a s:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: ip r s:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: iptables-save:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: iptables table nat:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> k8s: describe kube-proxy daemon set:
error: context "false-355113" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-355113" does not exist

>>> k8s: kube-proxy logs:
error: context "false-355113" does not exist

>>> host: kubelet daemon status:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: kubelet daemon config:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> k8s: kubelet logs:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-355113

>>> host: docker daemon status:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: docker daemon config:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: /etc/docker/daemon.json:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: docker system info:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: cri-docker daemon status:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: cri-docker daemon config:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: cri-dockerd version:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: containerd daemon status:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: containerd daemon config:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: /etc/containerd/config.toml:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: containerd config dump:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: crio daemon status:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: crio daemon config:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: /etc/crio:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

>>> host: crio config:
* Profile "false-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355113"

----------------------- debugLogs end: false-355113 [took: 4.66371825s] --------------------------------
helpers_test.go:175: Cleaning up "false-355113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-355113
--- PASS: TestNetworkPlugins/group/false (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (174.10s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-337794 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0620 18:43:06.144236  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
E0620 18:44:14.353851  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-337794 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m54.096558902s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (174.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-337794 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cd5df00c-f7d5-457f-89f8-3b29912bf9fd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cd5df00c-f7d5-457f-89f8-3b29912bf9fd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003339491s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-337794 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.66s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (75.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-530880 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-530880 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2: (1m15.378744484s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (75.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-337794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-337794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.175370337s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-337794 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (14.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-337794 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-337794 --alsologtostderr -v=3: (14.632065407s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (14.63s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-337794 -n old-k8s-version-337794
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-337794 -n old-k8s-version-337794: exit status 7 (77.085073ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-337794 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-530880 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [71c599eb-b323-40bc-9e36-3ea2af16584b] Pending
helpers_test.go:344: "busybox" [71c599eb-b323-40bc-9e36-3ea2af16584b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [71c599eb-b323-40bc-9e36-3ea2af16584b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.009622577s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-530880 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-530880 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-530880 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.062187729s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-530880 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-530880 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-530880 --alsologtostderr -v=3: (12.074238815s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-530880 -n no-preload-530880
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-530880 -n no-preload-530880: exit status 7 (74.613593ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-530880 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (297.68s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-530880 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2
E0620 18:48:06.144281  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
E0620 18:49:14.353619  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-530880 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2: (4m57.283159921s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-530880 -n no-preload-530880
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (297.68s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9j6rr" [5075a7cb-0d4a-4aa2-b78f-beef3c057a29] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003623268s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-q5n4n" [06b6ae0e-5442-4a38-82bf-97a3e230fcfb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003963814s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9j6rr" [5075a7cb-0d4a-4aa2-b78f-beef3c057a29] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004804352s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-337794 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-q5n4n" [06b6ae0e-5442-4a38-82bf-97a3e230fcfb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00400232s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-530880 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-337794 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-337794 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-337794 -n old-k8s-version-337794
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-337794 -n old-k8s-version-337794: exit status 2 (363.254923ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-337794 -n old-k8s-version-337794
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-337794 -n old-k8s-version-337794: exit status 2 (392.896203ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-337794 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-337794 -n old-k8s-version-337794
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-337794 -n old-k8s-version-337794
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.48s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-530880 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-530880 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-530880 --alsologtostderr -v=1: (1.036385054s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-530880 -n no-preload-530880
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-530880 -n no-preload-530880: exit status 2 (482.902761ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-530880 -n no-preload-530880
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-530880 -n no-preload-530880: exit status 2 (477.949758ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-530880 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-530880 -n no-preload-530880
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-530880 -n no-preload-530880
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.75s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (95.70s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-903607 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-903607 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2: (1m35.696550495s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (95.70s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (98.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-716345 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2
E0620 18:53:06.144268  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-716345 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2: (1m38.2166118s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (98.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-903607 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [36171a72-1f4a-45df-b7e7-a074799753fe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [36171a72-1f4a-45df-b7e7-a074799753fe] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003019862s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-903607 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-716345 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1d5b61fb-8286-4ff4-bd57-8be0202cbdd8] Pending
helpers_test.go:344: "busybox" [1d5b61fb-8286-4ff4-bd57-8be0202cbdd8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1d5b61fb-8286-4ff4-bd57-8be0202cbdd8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004312449s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-716345 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-903607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-903607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.017840995s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-903607 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-903607 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-903607 --alsologtostderr -v=3: (12.014820195s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-716345 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-716345 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-716345 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-716345 --alsologtostderr -v=3: (12.08258767s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-903607 -n embed-certs-903607
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-903607 -n embed-certs-903607: exit status 7 (67.677818ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-903607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (269.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-903607 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-903607 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2: (4m29.616573807s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-903607 -n embed-certs-903607
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (269.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-716345 -n default-k8s-diff-port-716345
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-716345 -n default-k8s-diff-port-716345: exit status 7 (90.312799ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-716345 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (271.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-716345 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2
E0620 18:53:57.403080  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
E0620 18:54:14.353273  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
E0620 18:54:52.892820  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/client.crt: no such file or directory
E0620 18:54:52.898284  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/client.crt: no such file or directory
E0620 18:54:52.908552  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/client.crt: no such file or directory
E0620 18:54:52.928867  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/client.crt: no such file or directory
E0620 18:54:52.969207  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/client.crt: no such file or directory
E0620 18:54:53.049682  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/client.crt: no such file or directory
E0620 18:54:53.210016  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/client.crt: no such file or directory
E0620 18:54:53.530256  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/client.crt: no such file or directory
E0620 18:54:54.170822  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/client.crt: no such file or directory
E0620 18:54:55.451394  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/client.crt: no such file or directory
E0620 18:54:58.011626  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/client.crt: no such file or directory
E0620 18:55:03.132129  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/client.crt: no such file or directory
E0620 18:55:13.372519  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/client.crt: no such file or directory
E0620 18:55:33.853315  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/client.crt: no such file or directory
E0620 18:56:11.806357  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/client.crt: no such file or directory
E0620 18:56:11.811726  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/client.crt: no such file or directory
E0620 18:56:11.821954  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/client.crt: no such file or directory
E0620 18:56:11.842286  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/client.crt: no such file or directory
E0620 18:56:11.882560  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/client.crt: no such file or directory
E0620 18:56:11.962878  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/client.crt: no such file or directory
E0620 18:56:12.123283  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/client.crt: no such file or directory
E0620 18:56:12.443886  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/client.crt: no such file or directory
E0620 18:56:13.084974  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/client.crt: no such file or directory
E0620 18:56:14.365125  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/client.crt: no such file or directory
E0620 18:56:14.813577  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/client.crt: no such file or directory
E0620 18:56:16.925246  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/client.crt: no such file or directory
E0620 18:56:22.045600  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/client.crt: no such file or directory
E0620 18:56:32.285896  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/client.crt: no such file or directory
E0620 18:56:52.766777  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/client.crt: no such file or directory
E0620 18:57:33.727212  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/client.crt: no such file or directory
E0620 18:57:36.734576  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/client.crt: no such file or directory
E0620 18:58:06.143815  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-716345 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2: (4m31.294555363s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-716345 -n default-k8s-diff-port-716345
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (271.68s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-hbwp9" [fb332d98-f746-429c-b20b-ba93d097d264] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004304991s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-hbwp9" [fb332d98-f746-429c-b20b-ba93d097d264] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004384514s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-903607 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-2c9d7" [368d1298-a937-4a27-ad8e-ed133d15090c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005042358s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-2c9d7" [368d1298-a937-4a27-ad8e-ed133d15090c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004227809s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-716345 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-903607 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-903607 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-903607 -n embed-certs-903607
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-903607 -n embed-certs-903607: exit status 2 (371.850418ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-903607 -n embed-certs-903607
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-903607 -n embed-certs-903607: exit status 2 (299.708988ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-903607 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-903607 -n embed-certs-903607
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-903607 -n embed-certs-903607
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-716345 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-716345 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-716345 -n default-k8s-diff-port-716345
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-716345 -n default-k8s-diff-port-716345: exit status 2 (397.138339ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-716345 -n default-k8s-diff-port-716345
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-716345 -n default-k8s-diff-port-716345: exit status 2 (399.347717ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-716345 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-716345 -n default-k8s-diff-port-716345
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-716345 -n default-k8s-diff-port-716345
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.87s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (49.59s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-648707 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-648707 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2: (49.585798066s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.59s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (90.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-355113 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0620 18:58:55.648267  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/client.crt: no such file or directory
E0620 18:59:14.353759  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-355113 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m30.128600167s)
--- PASS: TestNetworkPlugins/group/auto/Start (90.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-648707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-648707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.082345751s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-648707 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-648707 --alsologtostderr -v=3: (1.217360823s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-648707 -n newest-cni-648707
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-648707 -n newest-cni-648707: exit status 7 (70.61161ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-648707 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.69s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-648707 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-648707 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2: (17.337966374s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-648707 -n newest-cni-648707
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.69s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-648707 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.04s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-648707 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-648707 -n newest-cni-648707
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-648707 -n newest-cni-648707: exit status 2 (310.412483ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-648707 -n newest-cni-648707
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-648707 -n newest-cni-648707: exit status 2 (307.958584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-648707 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-648707 -n newest-cni-648707
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-648707 -n newest-cni-648707
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.04s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (90.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-355113 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0620 18:59:52.893318  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-355113 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m30.082773482s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (90.08s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-355113 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-355113 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-vwgh6" [971f5d11-482a-40ba-86ff-f8a197ebde86] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-vwgh6" [971f5d11-482a-40ba-86ff-f8a197ebde86] Running
E0620 19:00:20.575327  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003798359s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.34s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-355113 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-355113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-355113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (70.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-355113 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0620 19:01:11.806114  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-355113 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m10.925584178s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.93s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-t2sms" [23a77ae1-6e7a-46dc-961d-c3215cab1ddb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004887497s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-355113 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-355113 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-77k5h" [eb12c779-1f5e-4046-b764-07471cf02629] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-77k5h" [eb12c779-1f5e-4046-b764-07471cf02629] Running
E0620 19:01:39.489108  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/no-preload-530880/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005487252s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-355113 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-355113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-355113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-xjfxd" [5ee6c4f3-2cec-4019-8c92-eda57380d738] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005621504s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (68.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-355113 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-355113 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m8.599658761s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (68.60s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-355113 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-355113 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-lfqll" [d9303c74-afef-42f4-87e0-c572e88b4c60] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-lfqll" [d9303c74-afef-42f4-87e0-c572e88b4c60] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003490107s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.32s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-355113 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-355113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-355113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (48.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-355113 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0620 19:02:49.191461  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
E0620 19:03:06.143537  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/functional-979723/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-355113 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (48.982391241s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (48.98s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-355113 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-355113 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cl6td" [00f2b5e4-0db5-43ef-a581-6b8be570edfe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-cl6td" [00f2b5e4-0db5-43ef-a581-6b8be570edfe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003120996s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-355113 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-355113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-355113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-355113 "pgrep -a kubelet"
E0620 19:03:29.488578  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/default-k8s-diff-port-716345/client.crt: no such file or directory
E0620 19:03:29.568709  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/default-k8s-diff-port-716345/client.crt: no such file or directory
E0620 19:03:29.729031  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/default-k8s-diff-port-716345/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-355113 replace --force -f testdata/netcat-deployment.yaml
E0620 19:03:30.049211  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/default-k8s-diff-port-716345/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-mc6vd" [925f18c4-98b9-4fc6-b4be-fee3534a2971] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0620 19:03:30.689734  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/default-k8s-diff-port-716345/client.crt: no such file or directory
E0620 19:03:31.970110  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/default-k8s-diff-port-716345/client.crt: no such file or directory
E0620 19:03:34.531067  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/default-k8s-diff-port-716345/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-mc6vd" [925f18c4-98b9-4fc6-b4be-fee3534a2971] Running
E0620 19:03:39.651789  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/default-k8s-diff-port-716345/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.003642133s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.43s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-355113 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-355113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-355113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (66.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-355113 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-355113 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m6.643042474s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.64s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (55.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-355113 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0620 19:04:10.372750  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/default-k8s-diff-port-716345/client.crt: no such file or directory
E0620 19:04:14.353840  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/addons-527088/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-355113 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (55.740173409s)
--- PASS: TestNetworkPlugins/group/bridge/Start (55.74s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-pz8br" [5dca87fa-69e0-49d7-8087-8cb59a544e51] Running
E0620 19:04:51.333019  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/default-k8s-diff-port-716345/client.crt: no such file or directory
E0620 19:04:52.892394  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/old-k8s-version-337794/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004367286s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-355113 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-355113 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gp4fk" [be7132fd-8400-4682-9679-d0487b8e33c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-gp4fk" [be7132fd-8400-4682-9679-d0487b8e33c6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004197049s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-355113 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-355113 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-67ztn" [b96578f4-d36d-4cc2-bea8-15791993930a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-67ztn" [b96578f4-d36d-4cc2-bea8-15791993930a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003645799s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-355113 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-355113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-355113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-355113 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-355113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0620 19:05:15.064584  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/auto-355113/client.crt: no such file or directory
E0620 19:05:15.070988  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/auto-355113/client.crt: no such file or directory
E0620 19:05:15.081473  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/auto-355113/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-355113 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0620 19:05:15.102462  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/auto-355113/client.crt: no such file or directory
E0620 19:05:15.142751  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/auto-355113/client.crt: no such file or directory
E0620 19:05:15.223235  279671 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19106-274269/.minikube/profiles/auto-355113/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.23s)

                                                
                                    

Test skip (28/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.56s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-230526 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-230526" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-230526
--- SKIP: TestDownloadOnlyKic (0.56s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
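
All three TunnelCmd DNS subtests above are gated on the Hyperkit driver, which exists only on macOS. A sketch of that platform gate; the driver-name comparison is an assumption about how the suite identifies Hyperkit:

package example

import (
	"runtime"
	"testing"
)

// skipUnlessHyperkitOnDarwin sketches the platform gate for the tunnel DNS
// subtests: DNS forwarding needs the Hyperkit driver, which is macOS-only.
func skipUnlessHyperkitOnDarwin(t *testing.T, driver string) {
	if runtime.GOOS != "darwin" || driver != "hyperkit" {
		t.Skip("DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding")
	}
}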

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
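
TestGvisorAddon is opt-in through a --gvisor flag on the test binary. As an illustration of that flag gating (the wiring below is a sketch, not the minikube source):

package example

import (
	"flag"
	"testing"
)

// gvisor mirrors the opt-in flag named in the skip message; go test parses
// custom flags registered on the test binary like this one.
var gvisor = flag.Bool("gvisor", false, "run the gvisor addon test")

func TestGvisorAddon(t *testing.T) {
	if !*gvisor {
		t.Skip("skipping test because --gvisor=false")
	}
	// ... enable the gvisor addon and run a pod with the gvisor runtime class ...
}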

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)
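
TestChangeNoneUser needs both the none driver and a non-empty SUDO_USER. A minimal sketch of that precondition check; the driver string parameter is an assumption about how the suite passes the selected driver around:

package example

import (
	"os"
	"testing"
)

// skipUnlessNoneDriverWithSudoUser sketches the two preconditions the skip
// message names: the none driver and a non-empty SUDO_USER variable.
func skipUnlessNoneDriverWithSudoUser(t *testing.T, driver string) {
	if driver != "none" || os.Getenv("SUDO_USER") == "" {
		t.Skip("Test requires none driver and SUDO_USER env to not be empty")
	}
}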

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-994151" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-994151
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-355113 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-355113

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-355113

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-355113

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-355113

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-355113

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-355113

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-355113

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-355113

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-355113

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-355113

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-355113

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-355113" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-355113" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-355113

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355113"

                                                
                                                
----------------------- debugLogs end: kubenet-355113 [took: 4.875588503s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-355113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-355113
--- SKIP: TestNetworkPlugins/group/kubenet (5.13s)
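
The kubenet case is skipped because kubenet provides no CNI config while containerd requires CNI, yet the harness still dumps its debug logs and deletes the never-started profile afterwards. A hedged sketch of that skip-then-cleanup shape; cleanupProfile and testKubenet are hypothetical stand-ins, not the helpers_test.go implementation:

package example

import (
	"os/exec"
	"testing"
)

// cleanupProfile is a hypothetical stand-in for the profile cleanup helper:
// it deletes the minikube profile that was reserved for the subtest.
func cleanupProfile(t *testing.T, profile string) {
	t.Helper()
	if err := exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).Run(); err != nil {
		t.Logf("failed to delete profile %s: %v", profile, err)
	}
}

// testKubenet sketches the skip-then-cleanup shape seen above: the deferred
// cleanup still runs after t.Skipf, so the unused profile is removed.
func testKubenet(t *testing.T, containerRuntime, profile string) {
	defer cleanupProfile(t, profile)
	if containerRuntime != "docker" {
		t.Skipf("skipping: %s requires CNI, which kubenet does not provide", containerRuntime)
	}
	// ... start the cluster with kubenet networking and run connectivity checks ...
}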

                                                
                                    
TestNetworkPlugins/group/cilium (4.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-355113 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-355113

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-355113

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-355113

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-355113

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-355113

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-355113

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-355113

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-355113

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-355113

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-355113

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-355113

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-355113" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-355113

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-355113

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-355113

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-355113

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-355113" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-355113" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-355113

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-355113" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355113"

                                                
                                                
----------------------- debugLogs end: cilium-355113 [took: 4.671824615s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-355113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-355113
--- SKIP: TestNetworkPlugins/group/cilium (4.84s)

                                                
                                    