Test Report: Docker_Linux_containerd_arm64 18551

1118682035abaed82942a21ae2e13e14d2fd3192:2024-04-01:33835

Failed tests (7/335)

TestAddons/parallel/Ingress (37.33s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-126557 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-126557 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-126557 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9a5fbb1d-68e9-4fcf-90c0-9b5fab333b7d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9a5fbb1d-68e9-4fcf-90c0-9b5fab333b7d] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.030916801s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-126557 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-126557 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-126557 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.059823433s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-126557 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-126557 addons disable ingress-dns --alsologtostderr -v=1: (1.397913016s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-126557 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-126557 addons disable ingress --alsologtostderr -v=1: (7.75283581s)
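
To triage this failure by hand, the DNS check the test performs can be reproduced roughly as follows (a minimal sketch assuming the same profile name addons-126557, that the ingress and ingress-dns addons are still enabled, and that the repository's testdata/ingress-dns-example-v1.yaml is available locally; the commands and paths are taken from the test log above):

	# re-apply the example ingress the test deploys
	kubectl --context addons-126557 replace --force -f testdata/ingress-dns-example-v1.yaml

	# query the DNS server exposed by the ingress-dns addon at the minikube node IP;
	# the test expects an answer for hello-john.test, not a connection timeout
	nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-126557 ip)"
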
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-126557
helpers_test.go:235: (dbg) docker inspect addons-126557:

-- stdout --
	[
	    {
	        "Id": "3086df2a8c6a081dc52c2ede3362e7126f1e5d8560fdd12e1c3caedb39b00e67",
	        "Created": "2024-04-01T10:27:53.443190252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 447047,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-01T10:27:53.703461078Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d0f05b8b802e4c4af20a90d686bad8329f07849a8fda1b1d1c1dc3f527691df0",
	        "ResolvConfPath": "/var/lib/docker/containers/3086df2a8c6a081dc52c2ede3362e7126f1e5d8560fdd12e1c3caedb39b00e67/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3086df2a8c6a081dc52c2ede3362e7126f1e5d8560fdd12e1c3caedb39b00e67/hostname",
	        "HostsPath": "/var/lib/docker/containers/3086df2a8c6a081dc52c2ede3362e7126f1e5d8560fdd12e1c3caedb39b00e67/hosts",
	        "LogPath": "/var/lib/docker/containers/3086df2a8c6a081dc52c2ede3362e7126f1e5d8560fdd12e1c3caedb39b00e67/3086df2a8c6a081dc52c2ede3362e7126f1e5d8560fdd12e1c3caedb39b00e67-json.log",
	        "Name": "/addons-126557",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-126557:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-126557",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b0eb7310f7718ec9493fad8caeab807e9f767cfc33fe8d151b2d88eb5710a645-init/diff:/var/lib/docker/overlay2/65e26a120eed9f31cb763816aea149af9d6db48117d016131d4955e22e308b16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0eb7310f7718ec9493fad8caeab807e9f767cfc33fe8d151b2d88eb5710a645/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0eb7310f7718ec9493fad8caeab807e9f767cfc33fe8d151b2d88eb5710a645/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0eb7310f7718ec9493fad8caeab807e9f767cfc33fe8d151b2d88eb5710a645/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-126557",
	                "Source": "/var/lib/docker/volumes/addons-126557/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-126557",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-126557",
	                "name.minikube.sigs.k8s.io": "addons-126557",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f17e094ea37cc2add5251b9492496aca8f7dc185907c80e3e19ab65fa78272f1",
	            "SandboxKey": "/var/run/docker/netns/f17e094ea37c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-126557": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "38d3940061625d9091c661aa273f72beb5a72564d0c0406752aca3d821c25f12",
	                    "EndpointID": "f4c12e713dadfc0090413d071f9715233ee61108bbf5341401684f0c58803bb3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-126557",
	                        "3086df2a8c6a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
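
The IPAddress shown for the addons-126557 network above (192.168.49.2) is the address the failing nslookup targeted. For reference, it can be read back with the same inspect template the harness uses later in these logs (a sketch, assuming the container is still running):

	docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" addons-126557
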
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-126557 -n addons-126557
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-126557 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-126557 logs -n 25: (1.467439971s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p download-only-006513                                                                     | download-only-006513   | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC | 01 Apr 24 10:27 UTC |
	| delete  | -p download-only-919109                                                                     | download-only-919109   | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC | 01 Apr 24 10:27 UTC |
	| delete  | -p download-only-315775                                                                     | download-only-315775   | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC | 01 Apr 24 10:27 UTC |
	| start   | --download-only -p                                                                          | download-docker-971411 | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC |                     |
	|         | download-docker-971411                                                                      |                        |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |                |                     |                     |
	| delete  | -p download-docker-971411                                                                   | download-docker-971411 | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC | 01 Apr 24 10:27 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-553170   | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC |                     |
	|         | binary-mirror-553170                                                                        |                        |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |                |                     |                     |
	|         | http://127.0.0.1:34313                                                                      |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |                |                     |                     |
	| delete  | -p binary-mirror-553170                                                                     | binary-mirror-553170   | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC | 01 Apr 24 10:27 UTC |
	| addons  | enable dashboard -p                                                                         | addons-126557          | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC |                     |
	|         | addons-126557                                                                               |                        |         |                |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-126557          | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC |                     |
	|         | addons-126557                                                                               |                        |         |                |                     |                     |
	| start   | -p addons-126557 --wait=true                                                                | addons-126557          | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC | 01 Apr 24 10:29 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |                |                     |                     |
	|         | --addons=registry                                                                           |                        |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |                |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |                |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |                |                     |                     |
	| ip      | addons-126557 ip                                                                            | addons-126557          | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:29 UTC | 01 Apr 24 10:29 UTC |
	| addons  | addons-126557 addons disable                                                                | addons-126557          | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:29 UTC | 01 Apr 24 10:29 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |                |                     |                     |
	|         | -v=1                                                                                        |                        |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-126557          | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:29 UTC | 01 Apr 24 10:29 UTC |
	|         | -p addons-126557                                                                            |                        |         |                |                     |                     |
	| ssh     | addons-126557 ssh cat                                                                       | addons-126557          | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:30 UTC | 01 Apr 24 10:30 UTC |
	|         | /opt/local-path-provisioner/pvc-e946bd4c-0d39-436e-a133-57feb23c806a_default_test-pvc/file1 |                        |         |                |                     |                     |
	| addons  | addons-126557 addons disable                                                                | addons-126557          | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:30 UTC | 01 Apr 24 10:30 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | addons-126557 addons                                                                        | addons-126557          | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:30 UTC | 01 Apr 24 10:30 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | addons-126557 addons                                                                        | addons-126557          | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:30 UTC | 01 Apr 24 10:30 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-126557          | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:30 UTC | 01 Apr 24 10:30 UTC |
	|         | addons-126557                                                                               |                        |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-126557          | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:30 UTC | 01 Apr 24 10:30 UTC |
	|         | -p addons-126557                                                                            |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | addons-126557 addons                                                                        | addons-126557          | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:30 UTC | 01 Apr 24 10:30 UTC |
	|         | disable metrics-server                                                                      |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-126557          | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:30 UTC | 01 Apr 24 10:31 UTC |
	|         | addons-126557                                                                               |                        |         |                |                     |                     |
	| ssh     | addons-126557 ssh curl -s                                                                   | addons-126557          | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:31 UTC | 01 Apr 24 10:31 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |                |                     |                     |
	| ip      | addons-126557 ip                                                                            | addons-126557          | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:31 UTC | 01 Apr 24 10:31 UTC |
	| addons  | addons-126557 addons disable                                                                | addons-126557          | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:31 UTC | 01 Apr 24 10:31 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |                |                     |                     |
	|         | -v=1                                                                                        |                        |         |                |                     |                     |
	| addons  | addons-126557 addons disable                                                                | addons-126557          | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:31 UTC | 01 Apr 24 10:31 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 10:27:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 10:27:29.014183  446597 out.go:291] Setting OutFile to fd 1 ...
	I0401 10:27:29.014341  446597 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:27:29.014355  446597 out.go:304] Setting ErrFile to fd 2...
	I0401 10:27:29.014411  446597 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:27:29.014726  446597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18551-440344/.minikube/bin
	I0401 10:27:29.016255  446597 out.go:298] Setting JSON to false
	I0401 10:27:29.017352  446597 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7799,"bootTime":1711959450,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0401 10:27:29.017434  446597 start.go:139] virtualization:  
	I0401 10:27:29.028010  446597 out.go:177] * [addons-126557] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0401 10:27:29.037874  446597 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 10:27:29.037910  446597 notify.go:220] Checking for updates...
	I0401 10:27:29.056729  446597 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 10:27:29.077468  446597 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18551-440344/kubeconfig
	I0401 10:27:29.109207  446597 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18551-440344/.minikube
	I0401 10:27:29.139696  446597 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0401 10:27:29.171071  446597 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 10:27:29.204753  446597 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 10:27:29.224500  446597 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0401 10:27:29.224644  446597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 10:27:29.291233  446597 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-01 10:27:29.280213377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0401 10:27:29.291352  446597 docker.go:295] overlay module found
	I0401 10:27:29.315340  446597 out.go:177] * Using the docker driver based on user configuration
	I0401 10:27:29.348462  446597 start.go:297] selected driver: docker
	I0401 10:27:29.348497  446597 start.go:901] validating driver "docker" against <nil>
	I0401 10:27:29.348513  446597 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 10:27:29.349172  446597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 10:27:29.415904  446597 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-01 10:27:29.397908894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0401 10:27:29.416084  446597 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 10:27:29.416355  446597 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 10:27:29.444641  446597 out.go:177] * Using Docker driver with root privileges
	I0401 10:27:29.476326  446597 cni.go:84] Creating CNI manager for ""
	I0401 10:27:29.476365  446597 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0401 10:27:29.476378  446597 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 10:27:29.476476  446597 start.go:340] cluster config:
	{Name:addons-126557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-126557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:27:29.508259  446597 out.go:177] * Starting "addons-126557" primary control-plane node in "addons-126557" cluster
	I0401 10:27:29.540873  446597 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0401 10:27:29.589827  446597 out.go:177] * Pulling base image v0.0.43-1711559786-18485 ...
	I0401 10:27:29.621948  446597 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0401 10:27:29.622012  446597 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18551-440344/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	I0401 10:27:29.622042  446597 cache.go:56] Caching tarball of preloaded images
	I0401 10:27:29.622129  446597 preload.go:173] Found /home/jenkins/minikube-integration/18551-440344/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0401 10:27:29.622139  446597 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0401 10:27:29.622364  446597 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon
	I0401 10:27:29.622476  446597 profile.go:143] Saving config to /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/config.json ...
	I0401 10:27:29.622500  446597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/config.json: {Name:mk3ecc4d86dede0e8abe49dba8328d9102d85cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:27:29.635388  446597 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 to local cache
	I0401 10:27:29.635534  446597 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory
	I0401 10:27:29.635555  446597 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory, skipping pull
	I0401 10:27:29.635560  446597 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 exists in cache, skipping pull
	I0401 10:27:29.635569  446597 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 as a tarball
	I0401 10:27:29.635575  446597 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 from local cache
	I0401 10:27:46.261611  446597 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 from cached tarball
	I0401 10:27:46.261651  446597 cache.go:194] Successfully downloaded all kic artifacts
	I0401 10:27:46.261681  446597 start.go:360] acquireMachinesLock for addons-126557: {Name:mk2ec46b99afe876b925725d0b33c02f9946bde8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 10:27:46.261828  446597 start.go:364] duration metric: took 121.072µs to acquireMachinesLock for "addons-126557"
	I0401 10:27:46.261860  446597 start.go:93] Provisioning new machine with config: &{Name:addons-126557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-126557 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0401 10:27:46.261954  446597 start.go:125] createHost starting for "" (driver="docker")
	I0401 10:27:46.264660  446597 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0401 10:27:46.264910  446597 start.go:159] libmachine.API.Create for "addons-126557" (driver="docker")
	I0401 10:27:46.264946  446597 client.go:168] LocalClient.Create starting
	I0401 10:27:46.265086  446597 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca.pem
	I0401 10:27:46.807865  446597 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/cert.pem
	I0401 10:27:47.130328  446597 cli_runner.go:164] Run: docker network inspect addons-126557 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0401 10:27:47.143628  446597 cli_runner.go:211] docker network inspect addons-126557 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0401 10:27:47.143757  446597 network_create.go:281] running [docker network inspect addons-126557] to gather additional debugging logs...
	I0401 10:27:47.143780  446597 cli_runner.go:164] Run: docker network inspect addons-126557
	W0401 10:27:47.159513  446597 cli_runner.go:211] docker network inspect addons-126557 returned with exit code 1
	I0401 10:27:47.159546  446597 network_create.go:284] error running [docker network inspect addons-126557]: docker network inspect addons-126557: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-126557 not found
	I0401 10:27:47.159560  446597 network_create.go:286] output of [docker network inspect addons-126557]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-126557 not found
	
	** /stderr **
	I0401 10:27:47.159686  446597 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 10:27:47.173409  446597 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002598e50}
	I0401 10:27:47.173453  446597 network_create.go:124] attempt to create docker network addons-126557 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0401 10:27:47.173513  446597 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-126557 addons-126557
	I0401 10:27:47.238512  446597 network_create.go:108] docker network addons-126557 192.168.49.0/24 created
	I0401 10:27:47.238550  446597 kic.go:121] calculated static IP "192.168.49.2" for the "addons-126557" container
	I0401 10:27:47.238631  446597 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0401 10:27:47.251039  446597 cli_runner.go:164] Run: docker volume create addons-126557 --label name.minikube.sigs.k8s.io=addons-126557 --label created_by.minikube.sigs.k8s.io=true
	I0401 10:27:47.265781  446597 oci.go:103] Successfully created a docker volume addons-126557
	I0401 10:27:47.265881  446597 cli_runner.go:164] Run: docker run --rm --name addons-126557-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-126557 --entrypoint /usr/bin/test -v addons-126557:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 -d /var/lib
	I0401 10:27:49.171786  446597 cli_runner.go:217] Completed: docker run --rm --name addons-126557-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-126557 --entrypoint /usr/bin/test -v addons-126557:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 -d /var/lib: (1.90586107s)
	I0401 10:27:49.171821  446597 oci.go:107] Successfully prepared a docker volume addons-126557
	I0401 10:27:49.171850  446597 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0401 10:27:49.171880  446597 kic.go:194] Starting extracting preloaded images to volume ...
	I0401 10:27:49.171964  446597 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18551-440344/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-126557:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 -I lz4 -xf /preloaded.tar -C /extractDir
	I0401 10:27:53.375991  446597 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18551-440344/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-126557:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 -I lz4 -xf /preloaded.tar -C /extractDir: (4.203984725s)
	I0401 10:27:53.376026  446597 kic.go:203] duration metric: took 4.204153951s to extract preloaded images to volume ...
	W0401 10:27:53.376168  446597 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 10:27:53.376286  446597 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 10:27:53.430050  446597 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-126557 --name addons-126557 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-126557 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-126557 --network addons-126557 --ip 192.168.49.2 --volume addons-126557:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82
	I0401 10:27:53.712307  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Running}}
	I0401 10:27:53.737027  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Status}}
	I0401 10:27:53.756412  446597 cli_runner.go:164] Run: docker exec addons-126557 stat /var/lib/dpkg/alternatives/iptables
	I0401 10:27:53.817545  446597 oci.go:144] the created container "addons-126557" has a running status.
	I0401 10:27:53.817579  446597 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa...
	I0401 10:27:54.921325  446597 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 10:27:54.940210  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Status}}
	I0401 10:27:54.960772  446597 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 10:27:54.960795  446597 kic_runner.go:114] Args: [docker exec --privileged addons-126557 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 10:27:55.021877  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Status}}
	I0401 10:27:55.040504  446597 machine.go:94] provisionDockerMachine start ...
	I0401 10:27:55.040602  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:27:55.057714  446597 main.go:141] libmachine: Using SSH client type: native
	I0401 10:27:55.058002  446597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33167 <nil> <nil>}
	I0401 10:27:55.058013  446597 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 10:27:55.212802  446597 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-126557
	
	I0401 10:27:55.212828  446597 ubuntu.go:169] provisioning hostname "addons-126557"
	I0401 10:27:55.212894  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:27:55.230304  446597 main.go:141] libmachine: Using SSH client type: native
	I0401 10:27:55.230563  446597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33167 <nil> <nil>}
	I0401 10:27:55.230583  446597 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-126557 && echo "addons-126557" | sudo tee /etc/hostname
	I0401 10:27:55.382164  446597 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-126557
	
	I0401 10:27:55.382279  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:27:55.397303  446597 main.go:141] libmachine: Using SSH client type: native
	I0401 10:27:55.397552  446597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33167 <nil> <nil>}
	I0401 10:27:55.397580  446597 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-126557' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-126557/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-126557' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 10:27:55.533207  446597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
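The guarded script above is the usual Ubuntu/Debian hostname fix-up: if /etc/hosts has no entry for the new name, the 127.0.1.1 line is rewritten (or appended) so the container can resolve its own hostname. On a stock image the patched file would simply carry a line such as the following, shown here only for illustration:

	127.0.1.1 addons-126557
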
	I0401 10:27:55.533235  446597 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18551-440344/.minikube CaCertPath:/home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18551-440344/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18551-440344/.minikube}
	I0401 10:27:55.533259  446597 ubuntu.go:177] setting up certificates
	I0401 10:27:55.533269  446597 provision.go:84] configureAuth start
	I0401 10:27:55.533332  446597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-126557
	I0401 10:27:55.549320  446597 provision.go:143] copyHostCerts
	I0401 10:27:55.549415  446597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18551-440344/.minikube/ca.pem (1078 bytes)
	I0401 10:27:55.549551  446597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18551-440344/.minikube/cert.pem (1123 bytes)
	I0401 10:27:55.549611  446597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18551-440344/.minikube/key.pem (1679 bytes)
	I0401 10:27:55.549658  446597 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18551-440344/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca-key.pem org=jenkins.addons-126557 san=[127.0.0.1 192.168.49.2 addons-126557 localhost minikube]
	I0401 10:27:55.671162  446597 provision.go:177] copyRemoteCerts
	I0401 10:27:55.671231  446597 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 10:27:55.671273  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:27:55.687286  446597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa Username:docker}
	I0401 10:27:55.785849  446597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 10:27:55.810455  446597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0401 10:27:55.838405  446597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 10:27:55.862891  446597 provision.go:87] duration metric: took 329.608367ms to configureAuth
	I0401 10:27:55.862919  446597 ubuntu.go:193] setting minikube options for container-runtime
	I0401 10:27:55.863141  446597 config.go:182] Loaded profile config "addons-126557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0401 10:27:55.863155  446597 machine.go:97] duration metric: took 822.629069ms to provisionDockerMachine
	I0401 10:27:55.863163  446597 client.go:171] duration metric: took 9.598207493s to LocalClient.Create
	I0401 10:27:55.863188  446597 start.go:167] duration metric: took 9.598279967s to libmachine.API.Create "addons-126557"
	I0401 10:27:55.863201  446597 start.go:293] postStartSetup for "addons-126557" (driver="docker")
	I0401 10:27:55.863211  446597 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 10:27:55.863282  446597 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 10:27:55.863330  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:27:55.878839  446597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa Username:docker}
	I0401 10:27:55.978425  446597 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 10:27:55.981850  446597 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 10:27:55.981889  446597 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 10:27:55.981912  446597 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 10:27:55.981919  446597 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0401 10:27:55.981930  446597 filesync.go:126] Scanning /home/jenkins/minikube-integration/18551-440344/.minikube/addons for local assets ...
	I0401 10:27:55.982004  446597 filesync.go:126] Scanning /home/jenkins/minikube-integration/18551-440344/.minikube/files for local assets ...
	I0401 10:27:55.982031  446597 start.go:296] duration metric: took 118.823712ms for postStartSetup
	I0401 10:27:55.982353  446597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-126557
	I0401 10:27:55.997976  446597 profile.go:143] Saving config to /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/config.json ...
	I0401 10:27:55.998290  446597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 10:27:55.998342  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:27:56.022130  446597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa Username:docker}
	I0401 10:27:56.122122  446597 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 10:27:56.126485  446597 start.go:128] duration metric: took 9.86451335s to createHost
	I0401 10:27:56.126513  446597 start.go:83] releasing machines lock for "addons-126557", held for 9.864671344s
	I0401 10:27:56.126586  446597 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-126557
	I0401 10:27:56.141609  446597 ssh_runner.go:195] Run: cat /version.json
	I0401 10:27:56.141672  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:27:56.141941  446597 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 10:27:56.142008  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:27:56.162526  446597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa Username:docker}
	I0401 10:27:56.169381  446597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa Username:docker}
	I0401 10:27:56.256951  446597 ssh_runner.go:195] Run: systemctl --version
	I0401 10:27:56.371102  446597 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 10:27:56.375554  446597 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0401 10:27:56.401084  446597 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0401 10:27:56.401169  446597 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 10:27:56.431440  446597 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0401 10:27:56.431465  446597 start.go:494] detecting cgroup driver to use...
	I0401 10:27:56.431498  446597 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0401 10:27:56.431553  446597 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0401 10:27:56.444534  446597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 10:27:56.456708  446597 docker.go:217] disabling cri-docker service (if available) ...
	I0401 10:27:56.456782  446597 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 10:27:56.471023  446597 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 10:27:56.485798  446597 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 10:27:56.575431  446597 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 10:27:56.669782  446597 docker.go:233] disabling docker service ...
	I0401 10:27:56.669883  446597 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 10:27:56.690892  446597 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 10:27:56.703522  446597 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 10:27:56.798160  446597 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 10:27:56.884462  446597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 10:27:56.895796  446597 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 10:27:56.912762  446597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 10:27:56.923643  446597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 10:27:56.933981  446597 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 10:27:56.934097  446597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 10:27:56.944636  446597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 10:27:56.956348  446597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 10:27:56.966673  446597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 10:27:56.976702  446597 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 10:27:56.986978  446597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 10:27:56.997704  446597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 10:27:57.012537  446597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0401 10:27:57.026034  446597 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 10:27:57.044334  446597 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 10:27:57.054035  446597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 10:27:57.137652  446597 ssh_runner.go:195] Run: sudo systemctl restart containerd
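Taken together, the sed edits above rewrite /etc/containerd/config.toml in place before containerd is restarted: the sandbox (pause) image is pinned to registry.k8s.io/pause:3.9, restrict_oom_score_adj is turned off, SystemdCgroup is forced to false to match the cgroupfs driver detected earlier, the runc v2 runtime is selected, enable_unprivileged_ports is injected, and the CRI plugin's conf_dir is pointed at /etc/cni/net.d. One way to spot-check the result from the host, purely as an illustration and not part of the test run:

	out/minikube-linux-arm64 -p addons-126557 ssh "sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml"
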
	I0401 10:27:57.283776  446597 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0401 10:27:57.283929  446597 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0401 10:27:57.287823  446597 start.go:562] Will wait 60s for crictl version
	I0401 10:27:57.287940  446597 ssh_runner.go:195] Run: which crictl
	I0401 10:27:57.291612  446597 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 10:27:57.336941  446597 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0401 10:27:57.337099  446597 ssh_runner.go:195] Run: containerd --version
	I0401 10:27:57.359301  446597 ssh_runner.go:195] Run: containerd --version
	I0401 10:27:57.384128  446597 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.6.28 ...
	I0401 10:27:57.386370  446597 cli_runner.go:164] Run: docker network inspect addons-126557 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 10:27:57.399644  446597 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0401 10:27:57.403454  446597 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 10:27:57.414254  446597 kubeadm.go:877] updating cluster {Name:addons-126557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-126557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 10:27:57.414385  446597 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0401 10:27:57.414451  446597 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 10:27:57.451866  446597 containerd.go:627] all images are preloaded for containerd runtime.
	I0401 10:27:57.451894  446597 containerd.go:534] Images already preloaded, skipping extraction
	I0401 10:27:57.451958  446597 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 10:27:57.489725  446597 containerd.go:627] all images are preloaded for containerd runtime.
	I0401 10:27:57.489751  446597 cache_images.go:84] Images are preloaded, skipping loading
	I0401 10:27:57.489759  446597 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.29.3 containerd true true} ...
	I0401 10:27:57.489865  446597 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-126557 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-126557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
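The unit text above becomes the kubelet systemd drop-in: a few lines below it is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside a fresh /lib/systemd/system/kubelet.service, and the trailing config block is the KubernetesConfig it was rendered from. To see the unit the node actually loads, one illustrative check would be:

	out/minikube-linux-arm64 -p addons-126557 ssh "systemctl cat kubelet"
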
	I0401 10:27:57.489937  446597 ssh_runner.go:195] Run: sudo crictl info
	I0401 10:27:57.530727  446597 cni.go:84] Creating CNI manager for ""
	I0401 10:27:57.530754  446597 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0401 10:27:57.530763  446597 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 10:27:57.530786  446597 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-126557 NodeName:addons-126557 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 10:27:57.530930  446597 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-126557"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 10:27:57.531005  446597 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 10:27:57.540156  446597 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 10:27:57.540250  446597 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 10:27:57.549261  446597 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0401 10:27:57.568348  446597 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 10:27:57.587040  446597 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
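The 2167-byte file written here is the kubeadm config rendered a few lines above; it is promoted to /var/tmp/minikube/kubeadm.yaml just before kubeadm init runs. To inspect the file the node actually used, one illustrative option:

	out/minikube-linux-arm64 -p addons-126557 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml"
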
	I0401 10:27:57.605295  446597 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0401 10:27:57.608754  446597 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 10:27:57.619832  446597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 10:27:57.700503  446597 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 10:27:57.714376  446597 certs.go:68] Setting up /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557 for IP: 192.168.49.2
	I0401 10:27:57.714401  446597 certs.go:194] generating shared ca certs ...
	I0401 10:27:57.714422  446597 certs.go:226] acquiring lock for ca certs: {Name:mkcd78655f97da7a9cc32a54b546078a42807779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:27:57.714555  446597 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18551-440344/.minikube/ca.key
	I0401 10:27:57.871571  446597 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18551-440344/.minikube/ca.crt ...
	I0401 10:27:57.871601  446597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/.minikube/ca.crt: {Name:mk1b0fbb849f164d53a3066b44bb35ebc971cd1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:27:57.871807  446597 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18551-440344/.minikube/ca.key ...
	I0401 10:27:57.871820  446597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/.minikube/ca.key: {Name:mk13cbf91c6ab8daedb9baa455ffbc0da411f085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:27:57.871926  446597 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18551-440344/.minikube/proxy-client-ca.key
	I0401 10:27:58.298263  446597 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18551-440344/.minikube/proxy-client-ca.crt ...
	I0401 10:27:58.298296  446597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/.minikube/proxy-client-ca.crt: {Name:mk4f17d05b7464df607639898b960e29c53a2de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:27:58.298486  446597 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18551-440344/.minikube/proxy-client-ca.key ...
	I0401 10:27:58.298497  446597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/.minikube/proxy-client-ca.key: {Name:mkfd20615bde669bc2e132c148c5874590b8f937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:27:58.299118  446597 certs.go:256] generating profile certs ...
	I0401 10:27:58.299208  446597 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.key
	I0401 10:27:58.299233  446597 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt with IP's: []
	I0401 10:27:58.609630  446597 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt ...
	I0401 10:27:58.609663  446597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: {Name:mk239157f4b0bcd48d03dd376469dbfcaa338c8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:27:58.609897  446597 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.key ...
	I0401 10:27:58.609912  446597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.key: {Name:mk79b8b86b6c53e30cef7603a5580fa548692b85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:27:58.610011  446597 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/apiserver.key.ed48e385
	I0401 10:27:58.610034  446597 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/apiserver.crt.ed48e385 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0401 10:27:58.842347  446597 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/apiserver.crt.ed48e385 ...
	I0401 10:27:58.842383  446597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/apiserver.crt.ed48e385: {Name:mk08a418addff9df58a3659cb71da7e2d257e5ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:27:58.842612  446597 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/apiserver.key.ed48e385 ...
	I0401 10:27:58.842631  446597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/apiserver.key.ed48e385: {Name:mk7eb2b3c173f2235f650ed19fecdcbdb28ffa10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:27:58.842739  446597 certs.go:381] copying /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/apiserver.crt.ed48e385 -> /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/apiserver.crt
	I0401 10:27:58.842869  446597 certs.go:385] copying /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/apiserver.key.ed48e385 -> /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/apiserver.key
	I0401 10:27:58.842932  446597 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/proxy-client.key
	I0401 10:27:58.842957  446597 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/proxy-client.crt with IP's: []
	I0401 10:27:59.313275  446597 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/proxy-client.crt ...
	I0401 10:27:59.313309  446597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/proxy-client.crt: {Name:mk325c23e1e6dcde66319805d04d6d23da31b31a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:27:59.313528  446597 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/proxy-client.key ...
	I0401 10:27:59.313545  446597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/proxy-client.key: {Name:mk62ff22bf65b1547d27f56375768467f75b67b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:27:59.313750  446597 certs.go:484] found cert: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 10:27:59.313793  446597 certs.go:484] found cert: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca.pem (1078 bytes)
	I0401 10:27:59.313824  446597 certs.go:484] found cert: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/cert.pem (1123 bytes)
	I0401 10:27:59.313852  446597 certs.go:484] found cert: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/key.pem (1679 bytes)
	I0401 10:27:59.314493  446597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 10:27:59.340454  446597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 10:27:59.365435  446597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 10:27:59.389657  446597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0401 10:27:59.414830  446597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0401 10:27:59.439483  446597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 10:27:59.464361  446597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 10:27:59.489149  446597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 10:27:59.514776  446597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 10:27:59.540531  446597 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 10:27:59.559778  446597 ssh_runner.go:195] Run: openssl version
	I0401 10:27:59.565668  446597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 10:27:59.575416  446597 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 10:27:59.578999  446597 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 10:27 /usr/share/ca-certificates/minikubeCA.pem
	I0401 10:27:59.579069  446597 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 10:27:59.586157  446597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
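The two openssl steps above follow the standard OpenSSL CA-directory convention: "openssl x509 -hash -noout" prints the certificate's subject-name hash, and the symlink is then named <hash>.0, which is why it is created here as /etc/ssl/certs/b5213941.0. An equivalent one-liner, shown only as an illustration:

	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0"
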
	I0401 10:27:59.595614  446597 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 10:27:59.599121  446597 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 10:27:59.599173  446597 kubeadm.go:391] StartCluster: {Name:addons-126557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-126557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:27:59.599264  446597 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0401 10:27:59.599324  446597 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 10:27:59.640405  446597 cri.go:89] found id: ""
	I0401 10:27:59.640485  446597 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 10:27:59.649877  446597 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 10:27:59.659258  446597 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0401 10:27:59.659345  446597 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 10:27:59.668328  446597 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 10:27:59.668349  446597 kubeadm.go:156] found existing configuration files:
	
	I0401 10:27:59.668413  446597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 10:27:59.677292  446597 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 10:27:59.677361  446597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 10:27:59.686151  446597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 10:27:59.695708  446597 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 10:27:59.695780  446597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 10:27:59.704117  446597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 10:27:59.712729  446597 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 10:27:59.712821  446597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 10:27:59.721538  446597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 10:27:59.730877  446597 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 10:27:59.730964  446597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 10:27:59.739319  446597 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 10:27:59.785363  446597 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 10:27:59.785671  446597 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 10:27:59.826477  446597 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0401 10:27:59.826559  446597 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1056-aws
	I0401 10:27:59.826599  446597 kubeadm.go:309] OS: Linux
	I0401 10:27:59.826647  446597 kubeadm.go:309] CGROUPS_CPU: enabled
	I0401 10:27:59.826698  446597 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0401 10:27:59.826747  446597 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0401 10:27:59.826798  446597 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0401 10:27:59.826848  446597 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0401 10:27:59.826898  446597 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0401 10:27:59.826944  446597 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0401 10:27:59.826995  446597 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0401 10:27:59.827043  446597 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0401 10:27:59.917424  446597 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 10:27:59.917536  446597 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 10:27:59.917633  446597 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 10:28:00.540397  446597 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 10:28:00.544058  446597 out.go:204]   - Generating certificates and keys ...
	I0401 10:28:00.544177  446597 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 10:28:00.544248  446597 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 10:28:00.861935  446597 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 10:28:01.344519  446597 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0401 10:28:02.442844  446597 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0401 10:28:02.743814  446597 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0401 10:28:03.414094  446597 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0401 10:28:03.414237  446597 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-126557 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0401 10:28:04.302094  446597 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0401 10:28:04.302368  446597 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-126557 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0401 10:28:04.564321  446597 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 10:28:05.041705  446597 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 10:28:05.248413  446597 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0401 10:28:05.248749  446597 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 10:28:05.600396  446597 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 10:28:05.992934  446597 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 10:28:06.998976  446597 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 10:28:07.154683  446597 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 10:28:07.621258  446597 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 10:28:07.622067  446597 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 10:28:07.626862  446597 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 10:28:07.629323  446597 out.go:204]   - Booting up control plane ...
	I0401 10:28:07.629431  446597 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 10:28:07.629550  446597 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 10:28:07.629620  446597 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 10:28:07.640206  446597 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 10:28:07.641182  446597 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 10:28:07.641404  446597 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 10:28:07.743187  446597 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 10:28:14.745933  446597 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.002320 seconds
	I0401 10:28:14.768038  446597 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 10:28:14.780539  446597 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 10:28:15.307755  446597 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 10:28:15.307959  446597 kubeadm.go:309] [mark-control-plane] Marking the node addons-126557 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 10:28:15.819825  446597 kubeadm.go:309] [bootstrap-token] Using token: e8o5os.bdo150iljjgr9zy1
	I0401 10:28:15.821725  446597 out.go:204]   - Configuring RBAC rules ...
	I0401 10:28:15.821845  446597 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 10:28:15.827321  446597 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 10:28:15.838954  446597 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 10:28:15.842792  446597 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 10:28:15.846373  446597 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 10:28:15.850157  446597 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 10:28:15.864041  446597 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 10:28:16.089896  446597 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 10:28:16.232722  446597 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 10:28:16.234414  446597 kubeadm.go:309] 
	I0401 10:28:16.234486  446597 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 10:28:16.234492  446597 kubeadm.go:309] 
	I0401 10:28:16.234566  446597 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 10:28:16.234571  446597 kubeadm.go:309] 
	I0401 10:28:16.234596  446597 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 10:28:16.235025  446597 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 10:28:16.235080  446597 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 10:28:16.235085  446597 kubeadm.go:309] 
	I0401 10:28:16.235137  446597 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 10:28:16.235142  446597 kubeadm.go:309] 
	I0401 10:28:16.235192  446597 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 10:28:16.235197  446597 kubeadm.go:309] 
	I0401 10:28:16.235247  446597 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 10:28:16.235319  446597 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 10:28:16.235385  446597 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 10:28:16.235389  446597 kubeadm.go:309] 
	I0401 10:28:16.235662  446597 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 10:28:16.235754  446597 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 10:28:16.235760  446597 kubeadm.go:309] 
	I0401 10:28:16.236047  446597 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token e8o5os.bdo150iljjgr9zy1 \
	I0401 10:28:16.236152  446597 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:cbcf2f82093f41bf7a7754ef692c4c973c22ca55ec8f76b73ea2379c31d5d51a \
	I0401 10:28:16.236349  446597 kubeadm.go:309] 	--control-plane 
	I0401 10:28:16.236359  446597 kubeadm.go:309] 
	I0401 10:28:16.236666  446597 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 10:28:16.236676  446597 kubeadm.go:309] 
	I0401 10:28:16.236961  446597 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token e8o5os.bdo150iljjgr9zy1 \
	I0401 10:28:16.237400  446597 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:cbcf2f82093f41bf7a7754ef692c4c973c22ca55ec8f76b73ea2379c31d5d51a 
	I0401 10:28:16.240631  446597 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1056-aws\n", err: exit status 1
	I0401 10:28:16.240824  446597 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 10:28:16.240862  446597 cni.go:84] Creating CNI manager for ""
	I0401 10:28:16.240883  446597 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0401 10:28:16.244544  446597 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 10:28:16.247096  446597 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 10:28:16.251851  446597 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0401 10:28:16.251870  446597 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0401 10:28:16.285141  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
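Because the docker driver with the containerd runtime selects kindnet (as logged twice above), the manifest applied here is minikube's kindnet CNI manifest. A quick illustrative check that it rolled out, assuming the DaemonSet keeps its default kindnet name in kube-system:

	kubectl --context addons-126557 -n kube-system rollout status daemonset kindnet
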
	I0401 10:28:16.632561  446597 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 10:28:16.632707  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:16.632786  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-126557 minikube.k8s.io/updated_at=2024_04_01T10_28_16_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d minikube.k8s.io/name=addons-126557 minikube.k8s.io/primary=true
	I0401 10:28:16.775728  446597 ops.go:34] apiserver oom_adj: -16
	I0401 10:28:16.775817  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:17.276211  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:17.776371  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:18.276685  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:18.776580  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:19.275909  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:19.776210  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:20.276094  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:20.776932  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:21.276842  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:21.776499  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:22.276745  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:22.776844  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:23.276571  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:23.776869  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:24.276504  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:24.775940  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:25.276064  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:25.776618  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:26.276335  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:26.776277  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:27.276706  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:27.776609  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:28.276584  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:28.775961  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:29.275960  446597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 10:28:29.451397  446597 kubeadm.go:1107] duration metric: took 12.818736929s to wait for elevateKubeSystemPrivileges
	W0401 10:28:29.451435  446597 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 10:28:29.451443  446597 kubeadm.go:393] duration metric: took 29.85227568s to StartCluster
	I0401 10:28:29.451459  446597 settings.go:142] acquiring lock: {Name:mk276d29ae3bc72f373a524094e329002a16d918 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:28:29.451583  446597 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18551-440344/kubeconfig
	I0401 10:28:29.451962  446597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/kubeconfig: {Name:mka3c2a4390d3645e6f38c74c25892daa576bd87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 10:28:29.452501  446597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 10:28:29.452532  446597 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0401 10:28:29.454788  446597 out.go:177] * Verifying Kubernetes components...
	I0401 10:28:29.452766  446597 config.go:182] Loaded profile config "addons-126557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0401 10:28:29.452777  446597 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0401 10:28:29.456986  446597 addons.go:69] Setting yakd=true in profile "addons-126557"
	I0401 10:28:29.457015  446597 addons.go:234] Setting addon yakd=true in "addons-126557"
	I0401 10:28:29.457073  446597 host.go:66] Checking if "addons-126557" exists ...
	I0401 10:28:29.457567  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Status}}
	I0401 10:28:29.457717  446597 addons.go:69] Setting ingress-dns=true in profile "addons-126557"
	I0401 10:28:29.457746  446597 addons.go:234] Setting addon ingress-dns=true in "addons-126557"
	I0401 10:28:29.457781  446597 host.go:66] Checking if "addons-126557" exists ...
	I0401 10:28:29.458148  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Status}}
	I0401 10:28:29.458565  446597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 10:28:29.458706  446597 addons.go:69] Setting inspektor-gadget=true in profile "addons-126557"
	I0401 10:28:29.458731  446597 addons.go:234] Setting addon inspektor-gadget=true in "addons-126557"
	I0401 10:28:29.458756  446597 host.go:66] Checking if "addons-126557" exists ...
	I0401 10:28:29.458789  446597 addons.go:69] Setting cloud-spanner=true in profile "addons-126557"
	I0401 10:28:29.458810  446597 addons.go:234] Setting addon cloud-spanner=true in "addons-126557"
	I0401 10:28:29.458830  446597 host.go:66] Checking if "addons-126557" exists ...
	I0401 10:28:29.459136  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Status}}
	I0401 10:28:29.459193  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Status}}
	I0401 10:28:29.471770  446597 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-126557"
	I0401 10:28:29.471856  446597 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-126557"
	I0401 10:28:29.471890  446597 host.go:66] Checking if "addons-126557" exists ...
	I0401 10:28:29.472320  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Status}}
	I0401 10:28:29.472457  446597 addons.go:69] Setting metrics-server=true in profile "addons-126557"
	I0401 10:28:29.472473  446597 addons.go:234] Setting addon metrics-server=true in "addons-126557"
	I0401 10:28:29.472494  446597 host.go:66] Checking if "addons-126557" exists ...
	I0401 10:28:29.472841  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Status}}
	I0401 10:28:29.481848  446597 addons.go:69] Setting default-storageclass=true in profile "addons-126557"
	I0401 10:28:29.481931  446597 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-126557"
	I0401 10:28:29.482238  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Status}}
	I0401 10:28:29.482563  446597 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-126557"
	I0401 10:28:29.482596  446597 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-126557"
	I0401 10:28:29.482631  446597 host.go:66] Checking if "addons-126557" exists ...
	I0401 10:28:29.483010  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Status}}
	I0401 10:28:29.489582  446597 addons.go:69] Setting registry=true in profile "addons-126557"
	I0401 10:28:29.489628  446597 addons.go:234] Setting addon registry=true in "addons-126557"
	I0401 10:28:29.489666  446597 host.go:66] Checking if "addons-126557" exists ...
	I0401 10:28:29.490103  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Status}}
	I0401 10:28:29.492450  446597 addons.go:69] Setting gcp-auth=true in profile "addons-126557"
	I0401 10:28:29.492495  446597 mustload.go:65] Loading cluster: addons-126557
	I0401 10:28:29.496431  446597 addons.go:69] Setting ingress=true in profile "addons-126557"
	I0401 10:28:29.496523  446597 addons.go:234] Setting addon ingress=true in "addons-126557"
	I0401 10:28:29.496617  446597 host.go:66] Checking if "addons-126557" exists ...
	I0401 10:28:29.517026  446597 addons.go:69] Setting storage-provisioner=true in profile "addons-126557"
	I0401 10:28:29.517224  446597 addons.go:234] Setting addon storage-provisioner=true in "addons-126557"
	I0401 10:28:29.518182  446597 host.go:66] Checking if "addons-126557" exists ...
	I0401 10:28:29.518764  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Status}}
	I0401 10:28:29.541697  446597 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-126557"
	I0401 10:28:29.541795  446597 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-126557"
	I0401 10:28:29.542199  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Status}}
	I0401 10:28:29.563122  446597 addons.go:69] Setting volumesnapshots=true in profile "addons-126557"
	I0401 10:28:29.563220  446597 addons.go:234] Setting addon volumesnapshots=true in "addons-126557"
	I0401 10:28:29.563291  446597 host.go:66] Checking if "addons-126557" exists ...
	I0401 10:28:29.563926  446597 config.go:182] Loaded profile config "addons-126557": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0401 10:28:29.564234  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Status}}
	I0401 10:28:29.573266  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Status}}
	I0401 10:28:29.583044  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Status}}
	I0401 10:28:29.602997  446597 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0401 10:28:29.607897  446597 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0401 10:28:29.607925  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0401 10:28:29.607993  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:28:29.637344  446597 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0401 10:28:29.651894  446597 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0401 10:28:29.654426  446597 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0401 10:28:29.659997  446597 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0401 10:28:29.660073  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0401 10:28:29.660172  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:28:29.666147  446597 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0401 10:28:29.654687  446597 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0401 10:28:29.654691  446597 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0401 10:28:29.654697  446597 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0401 10:28:29.684740  446597 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0401 10:28:29.676410  446597 addons.go:234] Setting addon default-storageclass=true in "addons-126557"
	I0401 10:28:29.688785  446597 host.go:66] Checking if "addons-126557" exists ...
	I0401 10:28:29.689367  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Status}}
	I0401 10:28:29.697793  446597 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 10:28:29.697858  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 10:28:29.697956  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:28:29.702985  446597 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0401 10:28:29.703010  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0401 10:28:29.703078  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:28:29.719143  446597 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0401 10:28:29.719171  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0401 10:28:29.719235  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:28:29.719393  446597 out.go:177]   - Using image docker.io/registry:2.8.3
	I0401 10:28:29.725424  446597 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0401 10:28:29.720009  446597 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0401 10:28:29.721337  446597 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-126557"
	I0401 10:28:29.721416  446597 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0401 10:28:29.721423  446597 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0401 10:28:29.728856  446597 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0401 10:28:29.738072  446597 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0401 10:28:29.736010  446597 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 10:28:29.736054  446597 host.go:66] Checking if "addons-126557" exists ...
	I0401 10:28:29.736073  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0401 10:28:29.736088  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0401 10:28:29.740509  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Status}}
	I0401 10:28:29.740541  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:28:29.740572  446597 host.go:66] Checking if "addons-126557" exists ...
	I0401 10:28:29.746735  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:28:29.747956  446597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa Username:docker}
	I0401 10:28:29.757197  446597 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0401 10:28:29.762779  446597 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0401 10:28:29.762801  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0401 10:28:29.762859  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:28:29.757313  446597 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 10:28:29.757533  446597 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0401 10:28:29.780655  446597 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0401 10:28:29.784798  446597 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0401 10:28:29.788796  446597 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0401 10:28:29.785106  446597 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0401 10:28:29.785121  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 10:28:29.791073  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:28:29.799827  446597 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0401 10:28:29.800049  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0401 10:28:29.810986  446597 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0401 10:28:29.811009  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0401 10:28:29.811075  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:28:29.808811  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:28:29.838801  446597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa Username:docker}
	I0401 10:28:29.848276  446597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa Username:docker}
	I0401 10:28:29.868327  446597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa Username:docker}
	I0401 10:28:29.871935  446597 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 10:28:29.871958  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 10:28:29.872034  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:28:29.905504  446597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 10:28:29.905737  446597 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 10:28:29.928192  446597 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0401 10:28:29.930729  446597 out.go:177]   - Using image docker.io/busybox:stable
	I0401 10:28:29.933746  446597 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0401 10:28:29.933768  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0401 10:28:29.932314  446597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa Username:docker}
	I0401 10:28:29.938519  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:28:29.958092  446597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa Username:docker}
	I0401 10:28:29.965604  446597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa Username:docker}
	I0401 10:28:30.003452  446597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa Username:docker}
	I0401 10:28:30.026066  446597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa Username:docker}
	I0401 10:28:30.028801  446597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa Username:docker}
	I0401 10:28:30.030742  446597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa Username:docker}
	I0401 10:28:30.040952  446597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa Username:docker}
	I0401 10:28:30.042068  446597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa Username:docker}
	W0401 10:28:30.047426  446597 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0401 10:28:30.047460  446597 retry.go:31] will retry after 258.936492ms: ssh: handshake failed: EOF
	W0401 10:28:30.054816  446597 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0401 10:28:30.054850  446597 retry.go:31] will retry after 218.510317ms: ssh: handshake failed: EOF
	I0401 10:28:30.245463  446597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0401 10:28:30.504913  446597 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0401 10:28:30.504984  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0401 10:28:30.560143  446597 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 10:28:30.560213  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0401 10:28:30.673979  446597 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0401 10:28:30.674040  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0401 10:28:30.711990  446597 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0401 10:28:30.712068  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0401 10:28:30.729749  446597 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0401 10:28:30.729778  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0401 10:28:30.749950  446597 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0401 10:28:30.750024  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0401 10:28:30.771355  446597 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0401 10:28:30.771423  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0401 10:28:30.806230  446597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0401 10:28:30.844485  446597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 10:28:30.853478  446597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0401 10:28:30.893887  446597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0401 10:28:30.895815  446597 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 10:28:30.895880  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 10:28:30.897892  446597 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0401 10:28:30.897951  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0401 10:28:30.947024  446597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0401 10:28:30.970523  446597 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0401 10:28:30.970591  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0401 10:28:30.973418  446597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0401 10:28:31.106568  446597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 10:28:31.108574  446597 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0401 10:28:31.108652  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0401 10:28:31.118833  446597 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0401 10:28:31.118914  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0401 10:28:31.156438  446597 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0401 10:28:31.156519  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0401 10:28:31.202966  446597 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0401 10:28:31.203036  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0401 10:28:31.222936  446597 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 10:28:31.223009  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 10:28:31.288545  446597 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0401 10:28:31.288619  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0401 10:28:31.326802  446597 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0401 10:28:31.326874  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0401 10:28:31.360223  446597 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0401 10:28:31.361285  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0401 10:28:31.391131  446597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 10:28:31.500346  446597 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0401 10:28:31.500370  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0401 10:28:31.524157  446597 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0401 10:28:31.524177  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0401 10:28:31.726167  446597 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0401 10:28:31.726243  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0401 10:28:31.816224  446597 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0401 10:28:31.816294  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0401 10:28:31.926059  446597 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0401 10:28:31.926133  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0401 10:28:31.951368  446597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0401 10:28:31.985875  446597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0401 10:28:32.082204  446597 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0401 10:28:32.082278  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0401 10:28:32.189353  446597 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0401 10:28:32.189426  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0401 10:28:32.382599  446597 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0401 10:28:32.382673  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0401 10:28:32.417698  446597 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.511903179s)
	I0401 10:28:32.418679  446597 node_ready.go:35] waiting up to 6m0s for node "addons-126557" to be "Ready" ...
	I0401 10:28:32.418916  446597 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.513333674s)
	I0401 10:28:32.418958  446597 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0401 10:28:32.452553  446597 node_ready.go:49] node "addons-126557" has status "Ready":"True"
	I0401 10:28:32.452628  446597 node_ready.go:38] duration metric: took 33.899236ms for node "addons-126557" to be "Ready" ...
	I0401 10:28:32.452656  446597 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 10:28:32.495039  446597 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-5npps" in "kube-system" namespace to be "Ready" ...
	I0401 10:28:32.551591  446597 pod_ready.go:92] pod "coredns-76f75df574-5npps" in "kube-system" namespace has status "Ready":"True"
	I0401 10:28:32.551663  446597 pod_ready.go:81] duration metric: took 56.540075ms for pod "coredns-76f75df574-5npps" in "kube-system" namespace to be "Ready" ...
	I0401 10:28:32.551691  446597 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-vkcwj" in "kube-system" namespace to be "Ready" ...
	I0401 10:28:32.677582  446597 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0401 10:28:32.677654  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0401 10:28:32.705344  446597 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0401 10:28:32.705416  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0401 10:28:32.772902  446597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0401 10:28:32.871234  446597 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0401 10:28:32.871261  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0401 10:28:32.923191  446597 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-126557" context rescaled to 1 replicas
	I0401 10:28:33.059945  446597 pod_ready.go:92] pod "coredns-76f75df574-vkcwj" in "kube-system" namespace has status "Ready":"True"
	I0401 10:28:33.059974  446597 pod_ready.go:81] duration metric: took 508.261384ms for pod "coredns-76f75df574-vkcwj" in "kube-system" namespace to be "Ready" ...
	I0401 10:28:33.059988  446597 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-126557" in "kube-system" namespace to be "Ready" ...
	I0401 10:28:33.066873  446597 pod_ready.go:92] pod "etcd-addons-126557" in "kube-system" namespace has status "Ready":"True"
	I0401 10:28:33.066903  446597 pod_ready.go:81] duration metric: took 6.90623ms for pod "etcd-addons-126557" in "kube-system" namespace to be "Ready" ...
	I0401 10:28:33.066949  446597 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-126557" in "kube-system" namespace to be "Ready" ...
	I0401 10:28:33.074393  446597 pod_ready.go:92] pod "kube-apiserver-addons-126557" in "kube-system" namespace has status "Ready":"True"
	I0401 10:28:33.074428  446597 pod_ready.go:81] duration metric: took 7.46135ms for pod "kube-apiserver-addons-126557" in "kube-system" namespace to be "Ready" ...
	I0401 10:28:33.074446  446597 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-126557" in "kube-system" namespace to be "Ready" ...
	I0401 10:28:33.080730  446597 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0401 10:28:33.080756  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0401 10:28:33.231621  446597 pod_ready.go:92] pod "kube-controller-manager-addons-126557" in "kube-system" namespace has status "Ready":"True"
	I0401 10:28:33.231651  446597 pod_ready.go:81] duration metric: took 157.144715ms for pod "kube-controller-manager-addons-126557" in "kube-system" namespace to be "Ready" ...
	I0401 10:28:33.231696  446597 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7vv7n" in "kube-system" namespace to be "Ready" ...
	I0401 10:28:33.341688  446597 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0401 10:28:33.341714  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0401 10:28:33.623126  446597 pod_ready.go:92] pod "kube-proxy-7vv7n" in "kube-system" namespace has status "Ready":"True"
	I0401 10:28:33.623154  446597 pod_ready.go:81] duration metric: took 391.441481ms for pod "kube-proxy-7vv7n" in "kube-system" namespace to be "Ready" ...
	I0401 10:28:33.623167  446597 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-126557" in "kube-system" namespace to be "Ready" ...
	I0401 10:28:33.665167  446597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0401 10:28:33.790485  446597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.544949689s)
	I0401 10:28:34.049902  446597 pod_ready.go:92] pod "kube-scheduler-addons-126557" in "kube-system" namespace has status "Ready":"True"
	I0401 10:28:34.049933  446597 pod_ready.go:81] duration metric: took 426.724084ms for pod "kube-scheduler-addons-126557" in "kube-system" namespace to be "Ready" ...
	I0401 10:28:34.049980  446597 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-42fcm" in "kube-system" namespace to be "Ready" ...
	I0401 10:28:36.064162  446597 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-42fcm" in "kube-system" namespace has status "Ready":"False"
	I0401 10:28:36.799589  446597 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0401 10:28:36.799694  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:28:36.825447  446597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa Username:docker}
	I0401 10:28:37.211428  446597 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0401 10:28:37.246330  446597 addons.go:234] Setting addon gcp-auth=true in "addons-126557"
	I0401 10:28:37.246431  446597 host.go:66] Checking if "addons-126557" exists ...
	I0401 10:28:37.246956  446597 cli_runner.go:164] Run: docker container inspect addons-126557 --format={{.State.Status}}
	I0401 10:28:37.271924  446597 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0401 10:28:37.271977  446597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-126557
	I0401 10:28:37.290055  446597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/addons-126557/id_rsa Username:docker}
	I0401 10:28:38.004385  446597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.198071483s)
	I0401 10:28:38.004426  446597 addons.go:470] Verifying addon ingress=true in "addons-126557"
	I0401 10:28:38.007140  446597 out.go:177] * Verifying ingress addon...
	I0401 10:28:38.004672  446597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.160117572s)
	I0401 10:28:38.004694  446597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.151164494s)
	I0401 10:28:38.004830  446597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.110877711s)
	I0401 10:28:38.004859  446597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.057768893s)
	I0401 10:28:38.007482  446597 addons.go:470] Verifying addon registry=true in "addons-126557"
	I0401 10:28:38.009727  446597 out.go:177] * Verifying registry addon...
	I0401 10:28:38.004896  446597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.89825892s)
	I0401 10:28:38.005017  446597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.05357571s)
	I0401 10:28:38.005162  446597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.019210393s)
	I0401 10:28:38.005240  446597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.232312666s)
	I0401 10:28:38.005283  446597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.614128791s)
	I0401 10:28:38.004880  446597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.03140746s)
	W0401 10:28:38.012729  446597 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0401 10:28:38.012763  446597 retry.go:31] will retry after 235.866078ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0401 10:28:38.012845  446597 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0401 10:28:38.015835  446597 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-126557 service yakd-dashboard -n yakd-dashboard
	
	I0401 10:28:38.013020  446597 addons.go:470] Verifying addon metrics-server=true in "addons-126557"
	I0401 10:28:38.013234  446597 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0401 10:28:38.025607  446597 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0401 10:28:38.025636  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0401 10:28:38.033791  446597 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0401 10:28:38.036581  446597 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0401 10:28:38.036605  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:38.249197  446597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0401 10:28:38.518299  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:38.523801  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:38.557224  446597 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-42fcm" in "kube-system" namespace has status "Ready":"False"
	I0401 10:28:39.023450  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:39.027653  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:39.332731  446597 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.060756712s)
	I0401 10:28:39.335367  446597 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0401 10:28:39.334275  446597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.669044298s)
	I0401 10:28:39.337711  446597 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-126557"
	I0401 10:28:39.341896  446597 out.go:177] * Verifying csi-hostpath-driver addon...
	I0401 10:28:39.343767  446597 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0401 10:28:39.346816  446597 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0401 10:28:39.346844  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0401 10:28:39.344723  446597 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0401 10:28:39.357383  446597 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0401 10:28:39.357416  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:39.441694  446597 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0401 10:28:39.441722  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0401 10:28:39.467051  446597 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0401 10:28:39.467084  446597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0401 10:28:39.520296  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:39.523486  446597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0401 10:28:39.525993  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:39.855452  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:40.022335  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:40.036333  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:40.065952  446597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.81668138s)
	I0401 10:28:40.353385  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:40.534175  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:40.552279  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:40.644960  446597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.121435541s)
	I0401 10:28:40.648004  446597 addons.go:470] Verifying addon gcp-auth=true in "addons-126557"
	I0401 10:28:40.651701  446597 out.go:177] * Verifying gcp-auth addon...
	I0401 10:28:40.651537  446597 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-42fcm" in "kube-system" namespace has status "Ready":"False"
	I0401 10:28:40.654422  446597 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0401 10:28:40.671052  446597 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0401 10:28:40.671123  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:40.853825  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:41.018557  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:41.024668  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:41.158115  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:41.353237  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:41.517589  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:41.523212  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:41.658706  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:41.853424  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:42.020199  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:42.026249  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:42.060007  446597 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-42fcm" in "kube-system" namespace has status "Ready":"True"
	I0401 10:28:42.060136  446597 pod_ready.go:81] duration metric: took 8.010138219s for pod "nvidia-device-plugin-daemonset-42fcm" in "kube-system" namespace to be "Ready" ...
	I0401 10:28:42.060165  446597 pod_ready.go:38] duration metric: took 9.607481954s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 10:28:42.060210  446597 api_server.go:52] waiting for apiserver process to appear ...
	I0401 10:28:42.060314  446597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 10:28:42.110868  446597 api_server.go:72] duration metric: took 12.658300845s to wait for apiserver process to appear ...
	I0401 10:28:42.110950  446597 api_server.go:88] waiting for apiserver healthz status ...
	I0401 10:28:42.111013  446597 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0401 10:28:42.120136  446597 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0401 10:28:42.122343  446597 api_server.go:141] control plane version: v1.29.3
	I0401 10:28:42.122382  446597 api_server.go:131] duration metric: took 11.389168ms to wait for apiserver health ...
	I0401 10:28:42.122396  446597 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 10:28:42.138615  446597 system_pods.go:59] 18 kube-system pods found
	I0401 10:28:42.138661  446597 system_pods.go:61] "coredns-76f75df574-5npps" [afb08dac-acce-4c7f-814d-4d434efa416d] Running
	I0401 10:28:42.138673  446597 system_pods.go:61] "csi-hostpath-attacher-0" [2496b853-a2ea-4ebc-9147-4bbd24fa5547] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0401 10:28:42.138709  446597 system_pods.go:61] "csi-hostpath-resizer-0" [f8e2e55a-d67a-40c6-af70-3d4b8626884f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0401 10:28:42.138730  446597 system_pods.go:61] "csi-hostpathplugin-dscg6" [fda0a1aa-f6b4-42f5-abb4-391e9fdc73f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0401 10:28:42.138737  446597 system_pods.go:61] "etcd-addons-126557" [5285f4a8-a53b-4f3a-b20a-389bc4040670] Running
	I0401 10:28:42.138746  446597 system_pods.go:61] "kindnet-dl57l" [9c44660c-5e2b-4ed8-a843-4d07ffb43b0d] Running
	I0401 10:28:42.138751  446597 system_pods.go:61] "kube-apiserver-addons-126557" [90042445-897b-4ef0-a400-632bf55553ae] Running
	I0401 10:28:42.138756  446597 system_pods.go:61] "kube-controller-manager-addons-126557" [f24cab30-8af3-414b-9074-3b5ecd374f86] Running
	I0401 10:28:42.138786  446597 system_pods.go:61] "kube-ingress-dns-minikube" [be4cd3ee-7e81-4dbd-b313-0ce7ac5c10ca] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0401 10:28:42.138799  446597 system_pods.go:61] "kube-proxy-7vv7n" [f61e2422-4055-4640-a3ef-d722d083c801] Running
	I0401 10:28:42.138806  446597 system_pods.go:61] "kube-scheduler-addons-126557" [3ba36472-b7a5-4f6e-8f41-d746522da716] Running
	I0401 10:28:42.138824  446597 system_pods.go:61] "metrics-server-75d6c48ddd-plv9h" [15419844-7761-41fd-90ac-f12c5fcd0fcd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 10:28:42.138839  446597 system_pods.go:61] "nvidia-device-plugin-daemonset-42fcm" [ac4fd004-08f3-4874-9487-b879518c709f] Running
	I0401 10:28:42.138847  446597 system_pods.go:61] "registry-ml2g2" [bbaae877-1e11-468f-888a-9776609aa128] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0401 10:28:42.138879  446597 system_pods.go:61] "registry-proxy-4wwdd" [c2545582-2c57-437a-b39b-294cb4c20eaf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0401 10:28:42.138894  446597 system_pods.go:61] "snapshot-controller-58dbcc7b99-k7s7r" [8b0afebe-edc7-41c0-a48b-2c0d2c07eba8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0401 10:28:42.138902  446597 system_pods.go:61] "snapshot-controller-58dbcc7b99-stzjk" [bf564b2c-cebd-402c-ba7b-b1619dd1aa2a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0401 10:28:42.138911  446597 system_pods.go:61] "storage-provisioner" [06e6bdc6-fcaa-4a57-8edc-a7e010427504] Running
	I0401 10:28:42.138919  446597 system_pods.go:74] duration metric: took 16.516037ms to wait for pod list to return data ...
	I0401 10:28:42.138928  446597 default_sa.go:34] waiting for default service account to be created ...
	I0401 10:28:42.142207  446597 default_sa.go:45] found service account: "default"
	I0401 10:28:42.142418  446597 default_sa.go:55] duration metric: took 3.470968ms for default service account to be created ...
	I0401 10:28:42.142446  446597 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 10:28:42.165344  446597 system_pods.go:86] 18 kube-system pods found
	I0401 10:28:42.167421  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:42.167497  446597 system_pods.go:89] "coredns-76f75df574-5npps" [afb08dac-acce-4c7f-814d-4d434efa416d] Running
	I0401 10:28:42.167680  446597 system_pods.go:89] "csi-hostpath-attacher-0" [2496b853-a2ea-4ebc-9147-4bbd24fa5547] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0401 10:28:42.167706  446597 system_pods.go:89] "csi-hostpath-resizer-0" [f8e2e55a-d67a-40c6-af70-3d4b8626884f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0401 10:28:42.167718  446597 system_pods.go:89] "csi-hostpathplugin-dscg6" [fda0a1aa-f6b4-42f5-abb4-391e9fdc73f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0401 10:28:42.167733  446597 system_pods.go:89] "etcd-addons-126557" [5285f4a8-a53b-4f3a-b20a-389bc4040670] Running
	I0401 10:28:42.167745  446597 system_pods.go:89] "kindnet-dl57l" [9c44660c-5e2b-4ed8-a843-4d07ffb43b0d] Running
	I0401 10:28:42.167750  446597 system_pods.go:89] "kube-apiserver-addons-126557" [90042445-897b-4ef0-a400-632bf55553ae] Running
	I0401 10:28:42.167755  446597 system_pods.go:89] "kube-controller-manager-addons-126557" [f24cab30-8af3-414b-9074-3b5ecd374f86] Running
	I0401 10:28:42.167763  446597 system_pods.go:89] "kube-ingress-dns-minikube" [be4cd3ee-7e81-4dbd-b313-0ce7ac5c10ca] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0401 10:28:42.167768  446597 system_pods.go:89] "kube-proxy-7vv7n" [f61e2422-4055-4640-a3ef-d722d083c801] Running
	I0401 10:28:42.167774  446597 system_pods.go:89] "kube-scheduler-addons-126557" [3ba36472-b7a5-4f6e-8f41-d746522da716] Running
	I0401 10:28:42.167797  446597 system_pods.go:89] "metrics-server-75d6c48ddd-plv9h" [15419844-7761-41fd-90ac-f12c5fcd0fcd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 10:28:42.167810  446597 system_pods.go:89] "nvidia-device-plugin-daemonset-42fcm" [ac4fd004-08f3-4874-9487-b879518c709f] Running
	I0401 10:28:42.167822  446597 system_pods.go:89] "registry-ml2g2" [bbaae877-1e11-468f-888a-9776609aa128] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0401 10:28:42.167834  446597 system_pods.go:89] "registry-proxy-4wwdd" [c2545582-2c57-437a-b39b-294cb4c20eaf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0401 10:28:42.167842  446597 system_pods.go:89] "snapshot-controller-58dbcc7b99-k7s7r" [8b0afebe-edc7-41c0-a48b-2c0d2c07eba8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0401 10:28:42.168253  446597 system_pods.go:89] "snapshot-controller-58dbcc7b99-stzjk" [bf564b2c-cebd-402c-ba7b-b1619dd1aa2a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0401 10:28:42.168301  446597 system_pods.go:89] "storage-provisioner" [06e6bdc6-fcaa-4a57-8edc-a7e010427504] Running
	I0401 10:28:42.168321  446597 system_pods.go:126] duration metric: took 25.865742ms to wait for k8s-apps to be running ...
	I0401 10:28:42.168335  446597 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 10:28:42.168404  446597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 10:28:42.196305  446597 system_svc.go:56] duration metric: took 27.958078ms WaitForService to wait for kubelet
	I0401 10:28:42.196345  446597 kubeadm.go:576] duration metric: took 12.743784819s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 10:28:42.196400  446597 node_conditions.go:102] verifying NodePressure condition ...
	I0401 10:28:42.200979  446597 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0401 10:28:42.201023  446597 node_conditions.go:123] node cpu capacity is 2
	I0401 10:28:42.201037  446597 node_conditions.go:105] duration metric: took 4.629725ms to run NodePressure ...
	I0401 10:28:42.201427  446597 start.go:240] waiting for startup goroutines ...
	I0401 10:28:42.353869  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:42.517785  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:42.523156  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:42.659030  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:42.853002  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:43.018236  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:43.023378  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:43.158077  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:43.353128  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:43.518459  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:43.523393  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:43.658124  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:43.852849  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:44.018518  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:44.023860  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:44.158462  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:44.353373  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:44.518092  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:44.522550  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:44.658784  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:44.853127  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:45.064146  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:45.065612  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:45.169380  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:45.355327  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:45.518586  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:45.523527  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:45.658538  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:45.853366  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:46.017878  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:46.023011  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:46.158630  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:46.353326  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:46.517523  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:46.523285  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:46.662388  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:46.858498  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:47.018556  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:47.022975  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:47.158199  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:47.353630  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:47.517118  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:47.526267  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:47.658364  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:47.853162  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:48.018508  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:48.023562  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:48.158948  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:48.353374  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:48.518272  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:48.523080  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:48.658905  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:48.854086  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:49.018977  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:49.024267  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:49.159172  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:49.354676  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:49.517537  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:49.527943  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:49.660618  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:49.852804  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:50.018513  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:50.024001  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:50.159814  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:50.352313  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:50.517701  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:50.522252  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:50.658532  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:50.852751  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:51.028366  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:51.029439  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:51.158487  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:51.353393  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:51.523489  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:51.530104  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:51.659440  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:51.853472  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:52.022696  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:52.028041  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:52.159132  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:52.352838  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:52.517380  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:52.523082  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:52.658495  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:52.853494  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:53.019439  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:53.024984  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:53.159085  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:53.353739  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:53.518645  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:53.523618  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:53.659295  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:53.852327  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:54.019137  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:54.027194  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:54.158967  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:54.353742  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:54.517744  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:54.523326  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 10:28:54.658449  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:54.852689  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:55.023896  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:55.027227  446597 kapi.go:107] duration metric: took 17.013983531s to wait for kubernetes.io/minikube-addons=registry ...
	I0401 10:28:55.158479  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:55.352632  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:55.518073  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:55.659417  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:55.853168  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:56.018076  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:56.159052  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:56.359586  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:56.522133  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:56.658940  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:56.855910  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:57.020243  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:57.159186  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:57.360278  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:57.519629  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:57.659631  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:57.859371  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:58.020687  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:58.159026  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:58.357620  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:58.530107  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:58.672602  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:58.855544  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:59.018923  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:59.158958  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:59.352568  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:28:59.521093  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:28:59.658919  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:28:59.853281  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:00.090902  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:00.173399  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:00.360151  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:00.519634  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:00.663042  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:00.862580  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:01.020743  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:01.159310  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:01.353470  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:01.518381  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:01.658325  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:01.853215  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:02.017872  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:02.158762  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:02.352745  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:02.517258  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:02.658931  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:02.855541  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:03.018786  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:03.158594  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:03.353233  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:03.517285  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:03.659199  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:03.855633  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:04.021303  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:04.159194  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:04.352987  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:04.517444  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:04.658422  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:04.852706  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:05.018202  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:05.159239  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:05.352931  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:05.531344  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:05.658553  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:05.857211  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:06.018522  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:06.158345  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:06.352148  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:06.517983  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:06.658904  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:06.853186  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:07.019097  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:07.158856  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:07.352472  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:07.517571  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:07.658301  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:07.853209  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:08.017434  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:08.158514  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:08.353039  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:08.517760  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:08.658474  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:08.853551  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:09.020316  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:09.157748  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:09.352473  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:09.517902  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:09.658610  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:09.856138  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:10.018927  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:10.159039  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:10.355827  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:10.517185  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:10.659021  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:10.852643  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:11.018447  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:11.162903  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:11.354203  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:11.519742  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:11.658313  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:11.853999  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:12.017639  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:12.159414  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:12.352757  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:12.520783  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:12.658991  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:12.853114  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:13.019347  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:13.163080  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:13.353844  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:13.517808  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:13.658805  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:13.853780  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:14.018202  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:14.159008  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:14.353063  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:14.517345  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:14.658504  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:14.852903  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:15.030742  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:15.159439  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:15.353131  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:15.517723  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:15.658996  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:15.853232  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:16.019276  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:16.159065  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:16.370997  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:16.518560  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:16.658675  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:16.853007  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:17.017371  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:17.158455  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:17.354084  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:17.517564  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:17.658539  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:17.853791  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:18.018858  446597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 10:29:18.158439  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:18.353225  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:18.517696  446597 kapi.go:107] duration metric: took 40.50484771s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0401 10:29:18.658181  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:18.853127  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:19.158256  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:19.352643  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:19.659245  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:19.853556  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:20.159544  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:20.353035  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:20.659604  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 10:29:20.853376  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:21.166681  446597 kapi.go:107] duration metric: took 40.512257101s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0401 10:29:21.169116  446597 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-126557 cluster.
	I0401 10:29:21.171852  446597 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0401 10:29:21.174036  446597 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0401 10:29:21.353443  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:21.853928  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:22.352659  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:22.853567  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:23.352471  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:23.852247  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:24.354421  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:24.852310  446597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 10:29:25.352835  446597 kapi.go:107] duration metric: took 46.008109599s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0401 10:29:25.355099  446597 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, cloud-spanner, inspektor-gadget, yakd, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0401 10:29:25.357332  446597 addons.go:505] duration metric: took 55.904547628s for enable addons: enabled=[ingress-dns storage-provisioner nvidia-device-plugin cloud-spanner inspektor-gadget yakd metrics-server storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0401 10:29:25.357383  446597 start.go:245] waiting for cluster config update ...
	I0401 10:29:25.357404  446597 start.go:254] writing updated cluster config ...
	I0401 10:29:25.357710  446597 ssh_runner.go:195] Run: rm -f paused
	I0401 10:29:25.695587  446597 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 10:29:25.698181  446597 out.go:177] * Done! kubectl is now configured to use "addons-126557" cluster and "default" namespace by default
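
A note on the gcp-auth messages above: once the addon is enabled, GCP credentials are mounted into every newly created pod unless the pod carries the `gcp-auth-skip-secret` label. As an illustrative sketch only (the pod name "no-creds-demo" is invented here, and the label value "true" follows the minikube documentation rather than anything captured in this log), a pod could opt out like this:

	# Hypothetical example, not part of the test run: create a pod that the
	# gcp-auth webhook should skip because of the gcp-auth-skip-secret label.
	kubectl --context addons-126557 run no-creds-demo --image=nginx --labels=gcp-auth-skip-secret=true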
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	470e1f169962d       dd1b12fcb6097       9 seconds ago       Exited              hello-world-app           2                   643be5bbaf6c4       hello-world-app-5d77478584-5v6jv
	2d16321f17aca       b8c82647e8a25       33 seconds ago      Running             nginx                     0                   367ba99d7bfbf       nginx
	c3f963c707035       29799520898e7       43 seconds ago      Running             headlamp                  0                   96a822e1c2d5c       headlamp-5b77dbd7c4-nl4s4
	29bfc5c4bcf46       6ef582f3ec844       2 minutes ago       Running             gcp-auth                  0                   94bc9c1f4e4e9       gcp-auth-7d69788767-lxhcw
	5f6bbefb14661       1a024e390dd05       2 minutes ago       Exited              patch                     0                   51a899b909e32       ingress-nginx-admission-patch-5jdzk
	ac58565f65db2       1a024e390dd05       2 minutes ago       Exited              create                    0                   306375f9fdf0b       ingress-nginx-admission-create-gws2j
	5a757453d61d9       20e3f2db01e81       2 minutes ago       Running             yakd                      0                   37ae5359a72d0       yakd-dashboard-9947fc6bf-f7x77
	f98eca9d86666       ba04bb24b9575       2 minutes ago       Running             storage-provisioner       0                   6fc97974b0629       storage-provisioner
	65220ddf68128       2437cf7621777       2 minutes ago       Running             coredns                   0                   ca794a40bd2f6       coredns-76f75df574-5npps
	ddafea8d880ba       4740c1948d3fc       2 minutes ago       Running             kindnet-cni               0                   78e760d101a8a       kindnet-dl57l
	93e16bca3b2c9       0e9b4a0d1e86d       2 minutes ago       Running             kube-proxy                0                   bd9866508e944       kube-proxy-7vv7n
	712e13891bcd7       014faa467e297       3 minutes ago       Running             etcd                      0                   eddb643ffdc73       etcd-addons-126557
	6b44715270cf1       4b51f9f6bc9b9       3 minutes ago       Running             kube-scheduler            0                   d6aa44e62bb0e       kube-scheduler-addons-126557
	93d075932a570       2581114f5709d       3 minutes ago       Running             kube-apiserver            0                   6c56da1b0bc54       kube-apiserver-addons-126557
	17719dfe11d7d       121d70d9a3805       3 minutes ago       Running             kube-controller-manager   0                   c9298fef10145       kube-controller-manager-addons-126557
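
The table above lists each container's image, state, and restart attempt; hello-world-app is shown Exited after repeated attempts while nginx, headlamp, and the control-plane containers are Running. One plausible way to gather an equivalent listing by hand (assumed here, not taken from this report) is to run crictl on the node:

	# Assumed reproduction step: list all containers on the minikube node,
	# including exited ones.
	out/minikube-linux-arm64 -p addons-126557 ssh "sudo crictl ps -a"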
	
	
	==> containerd <==
	Apr 01 10:31:17 addons-126557 containerd[772]: time="2024-04-01T10:31:17.300552874Z" level=info msg="cleaning up dead shim"
	Apr 01 10:31:17 addons-126557 containerd[772]: time="2024-04-01T10:31:17.308519049Z" level=warning msg="cleanup warnings time=\"2024-04-01T10:31:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10748 runtime=io.containerd.runc.v2\n"
	Apr 01 10:31:17 addons-126557 containerd[772]: time="2024-04-01T10:31:17.735178081Z" level=info msg="RemoveContainer for \"2a631faecd8b5f7000af43b08a74d92c6ef996411acb8ef4a7ce07e61e9f3e7a\""
	Apr 01 10:31:17 addons-126557 containerd[772]: time="2024-04-01T10:31:17.742309438Z" level=info msg="RemoveContainer for \"2a631faecd8b5f7000af43b08a74d92c6ef996411acb8ef4a7ce07e61e9f3e7a\" returns successfully"
	Apr 01 10:31:17 addons-126557 containerd[772]: time="2024-04-01T10:31:17.744528845Z" level=info msg="RemoveContainer for \"095e0fae12aa0bd754c759d4bfad58ff4ea48f5a976d684e3b8b97f9b69dff50\""
	Apr 01 10:31:17 addons-126557 containerd[772]: time="2024-04-01T10:31:17.767044257Z" level=info msg="RemoveContainer for \"095e0fae12aa0bd754c759d4bfad58ff4ea48f5a976d684e3b8b97f9b69dff50\" returns successfully"
	Apr 01 10:31:19 addons-126557 containerd[772]: time="2024-04-01T10:31:19.461538278Z" level=info msg="StopContainer for \"9d387bf536e0e2cb9220701f8e95176c53240fb0c48bcc688d0ffb22a282c1c2\" with timeout 2 (s)"
	Apr 01 10:31:19 addons-126557 containerd[772]: time="2024-04-01T10:31:19.462136302Z" level=info msg="Stop container \"9d387bf536e0e2cb9220701f8e95176c53240fb0c48bcc688d0ffb22a282c1c2\" with signal terminated"
	Apr 01 10:31:21 addons-126557 containerd[772]: time="2024-04-01T10:31:21.468425600Z" level=info msg="Kill container \"9d387bf536e0e2cb9220701f8e95176c53240fb0c48bcc688d0ffb22a282c1c2\""
	Apr 01 10:31:21 addons-126557 containerd[772]: time="2024-04-01T10:31:21.539394374Z" level=info msg="shim disconnected" id=9d387bf536e0e2cb9220701f8e95176c53240fb0c48bcc688d0ffb22a282c1c2
	Apr 01 10:31:21 addons-126557 containerd[772]: time="2024-04-01T10:31:21.539679847Z" level=warning msg="cleaning up after shim disconnected" id=9d387bf536e0e2cb9220701f8e95176c53240fb0c48bcc688d0ffb22a282c1c2 namespace=k8s.io
	Apr 01 10:31:21 addons-126557 containerd[772]: time="2024-04-01T10:31:21.539703716Z" level=info msg="cleaning up dead shim"
	Apr 01 10:31:21 addons-126557 containerd[772]: time="2024-04-01T10:31:21.548054537Z" level=warning msg="cleanup warnings time=\"2024-04-01T10:31:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10842 runtime=io.containerd.runc.v2\n"
	Apr 01 10:31:21 addons-126557 containerd[772]: time="2024-04-01T10:31:21.551239770Z" level=info msg="StopContainer for \"9d387bf536e0e2cb9220701f8e95176c53240fb0c48bcc688d0ffb22a282c1c2\" returns successfully"
	Apr 01 10:31:21 addons-126557 containerd[772]: time="2024-04-01T10:31:21.551793192Z" level=info msg="StopPodSandbox for \"348403890578ee70d4c44712cd80e5fe8118fae042189494325b619044b479d7\""
	Apr 01 10:31:21 addons-126557 containerd[772]: time="2024-04-01T10:31:21.551849576Z" level=info msg="Container to stop \"9d387bf536e0e2cb9220701f8e95176c53240fb0c48bcc688d0ffb22a282c1c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Apr 01 10:31:21 addons-126557 containerd[772]: time="2024-04-01T10:31:21.580097004Z" level=info msg="shim disconnected" id=348403890578ee70d4c44712cd80e5fe8118fae042189494325b619044b479d7
	Apr 01 10:31:21 addons-126557 containerd[772]: time="2024-04-01T10:31:21.580299018Z" level=warning msg="cleaning up after shim disconnected" id=348403890578ee70d4c44712cd80e5fe8118fae042189494325b619044b479d7 namespace=k8s.io
	Apr 01 10:31:21 addons-126557 containerd[772]: time="2024-04-01T10:31:21.580386892Z" level=info msg="cleaning up dead shim"
	Apr 01 10:31:21 addons-126557 containerd[772]: time="2024-04-01T10:31:21.589967796Z" level=warning msg="cleanup warnings time=\"2024-04-01T10:31:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10876 runtime=io.containerd.runc.v2\n"
	Apr 01 10:31:21 addons-126557 containerd[772]: time="2024-04-01T10:31:21.637514906Z" level=info msg="TearDown network for sandbox \"348403890578ee70d4c44712cd80e5fe8118fae042189494325b619044b479d7\" successfully"
	Apr 01 10:31:21 addons-126557 containerd[772]: time="2024-04-01T10:31:21.637578511Z" level=info msg="StopPodSandbox for \"348403890578ee70d4c44712cd80e5fe8118fae042189494325b619044b479d7\" returns successfully"
	Apr 01 10:31:21 addons-126557 containerd[772]: time="2024-04-01T10:31:21.749193382Z" level=info msg="RemoveContainer for \"9d387bf536e0e2cb9220701f8e95176c53240fb0c48bcc688d0ffb22a282c1c2\""
	Apr 01 10:31:21 addons-126557 containerd[772]: time="2024-04-01T10:31:21.754382396Z" level=info msg="RemoveContainer for \"9d387bf536e0e2cb9220701f8e95176c53240fb0c48bcc688d0ffb22a282c1c2\" returns successfully"
	Apr 01 10:31:21 addons-126557 containerd[772]: time="2024-04-01T10:31:21.755008809Z" level=error msg="ContainerStatus for \"9d387bf536e0e2cb9220701f8e95176c53240fb0c48bcc688d0ffb22a282c1c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d387bf536e0e2cb9220701f8e95176c53240fb0c48bcc688d0ffb22a282c1c2\": not found"
	
	
	==> coredns [65220ddf681287e588e4b59d2b82d08ec20a6adbcf75c97bfd5c8272ca78028f] <==
	[INFO] 10.244.0.20:54455 - 8507 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000064245s
	[INFO] 10.244.0.20:54455 - 27438 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000061316s
	[INFO] 10.244.0.20:54455 - 5375 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000098008s
	[INFO] 10.244.0.20:54455 - 10506 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000054374s
	[INFO] 10.244.0.20:54455 - 36764 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.005814951s
	[INFO] 10.244.0.20:54455 - 52670 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.0014414s
	[INFO] 10.244.0.20:54455 - 24670 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000083813s
	[INFO] 10.244.0.20:36712 - 9945 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000114656s
	[INFO] 10.244.0.20:34688 - 4276 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000126044s
	[INFO] 10.244.0.20:34688 - 28829 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000153851s
	[INFO] 10.244.0.20:36712 - 52071 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000153153s
	[INFO] 10.244.0.20:36712 - 33188 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00014183s
	[INFO] 10.244.0.20:34688 - 42040 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000129958s
	[INFO] 10.244.0.20:36712 - 64899 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000065368s
	[INFO] 10.244.0.20:36712 - 16556 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000075059s
	[INFO] 10.244.0.20:34688 - 65129 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000175052s
	[INFO] 10.244.0.20:36712 - 34584 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038145s
	[INFO] 10.244.0.20:34688 - 53690 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000128497s
	[INFO] 10.244.0.20:34688 - 6036 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037907s
	[INFO] 10.244.0.20:34688 - 28340 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000815561s
	[INFO] 10.244.0.20:36712 - 39050 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001575246s
	[INFO] 10.244.0.20:34688 - 23895 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001316086s
	[INFO] 10.244.0.20:34688 - 1127 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049558s
	[INFO] 10.244.0.20:36712 - 1017 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001459771s
	[INFO] 10.244.0.20:36712 - 59464 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000076043s
	
	
	==> describe nodes <==
	Name:               addons-126557
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-126557
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d
	                    minikube.k8s.io/name=addons-126557
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T10_28_16_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-126557
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 10:28:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-126557
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 10:31:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 10:31:19 +0000   Mon, 01 Apr 2024 10:28:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 10:31:19 +0000   Mon, 01 Apr 2024 10:28:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 10:31:19 +0000   Mon, 01 Apr 2024 10:28:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 10:31:19 +0000   Mon, 01 Apr 2024 10:28:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-126557
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022560Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022560Ki
	  pods:               110
	System Info:
	  Machine ID:                 663dee7d95ad4ccca23ab1fb6495e324
	  System UUID:                7744c238-163a-4b9c-9096-8dba2120097a
	  Boot ID:                    2e0ae28a-b3da-4fcf-af6c-d595b2697792
	  Kernel Version:             5.15.0-1056-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-5v6jv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  gcp-auth                    gcp-auth-7d69788767-lxhcw                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  headlamp                    headlamp-5b77dbd7c4-nl4s4                0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 coredns-76f75df574-5npps                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m57s
	  kube-system                 etcd-addons-126557                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3m10s
	  kube-system                 kindnet-dl57l                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m57s
	  kube-system                 kube-apiserver-addons-126557             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 kube-controller-manager-addons-126557    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 kube-proxy-7vv7n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-scheduler-addons-126557             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-f7x77           0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m56s  kube-proxy       
	  Normal  Starting                 3m10s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m10s  kubelet          Node addons-126557 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m10s  kubelet          Node addons-126557 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m10s  kubelet          Node addons-126557 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m10s  kubelet          Node addons-126557 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m10s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m     kubelet          Node addons-126557 status is now: NodeReady
	  Normal  RegisteredNode           2m57s  node-controller  Node addons-126557 event: Registered Node addons-126557 in Controller
	
	
	==> dmesg <==
	[  +0.000927] FS-Cache: N-cookie d=00000000fed3411a{9p.inode} n=00000000d3714c05
	[  +0.001066] FS-Cache: N-key=[8] '866ced0000000000'
	[  +0.014698] FS-Cache: Duplicate cookie detected
	[  +0.000730] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001043] FS-Cache: O-cookie d=00000000fed3411a{9p.inode} n=000000008ea0aaf4
	[  +0.001057] FS-Cache: O-key=[8] '866ced0000000000'
	[  +0.000704] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000938] FS-Cache: N-cookie d=00000000fed3411a{9p.inode} n=000000000c049b38
	[  +0.001080] FS-Cache: N-key=[8] '866ced0000000000'
	[  +3.601782] FS-Cache: Duplicate cookie detected
	[  +0.000780] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001216] FS-Cache: O-cookie d=00000000fed3411a{9p.inode} n=000000001dfb3c8a
	[  +0.001271] FS-Cache: O-key=[8] '856ced0000000000'
	[  +0.000745] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.001083] FS-Cache: N-cookie d=00000000fed3411a{9p.inode} n=00000000d3714c05
	[  +0.001310] FS-Cache: N-key=[8] '856ced0000000000'
	[  +0.350018] FS-Cache: Duplicate cookie detected
	[  +0.000793] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001121] FS-Cache: O-cookie d=00000000fed3411a{9p.inode} n=00000000174db64c
	[  +0.001120] FS-Cache: O-key=[8] '906ced0000000000'
	[  +0.000777] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000955] FS-Cache: N-cookie d=00000000fed3411a{9p.inode} n=00000000a2a76256
	[  +0.001171] FS-Cache: N-key=[8] '906ced0000000000'
	[Apr 1 09:55] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Apr 1 10:02] systemd-journald[216]: Failed to send stream file descriptor to service manager: Connection refused
	
	
	==> etcd [712e13891bcd7fa4e2f6539756e5b74781a9d9a4dca9d48f32c507acb4c250bf] <==
	{"level":"info","ts":"2024-04-01T10:28:09.526097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-04-01T10:28:09.526211Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-04-01T10:28:09.55832Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-01T10:28:09.558523Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-01T10:28:09.558548Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-01T10:28:09.558647Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-04-01T10:28:09.558659Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-04-01T10:28:10.305145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-01T10:28:10.305202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-01T10:28:10.305219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-04-01T10:28:10.305245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-04-01T10:28:10.305312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-04-01T10:28:10.305344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-04-01T10:28:10.305406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-04-01T10:28:10.309164Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T10:28:10.313291Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-126557 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-01T10:28:10.313459Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T10:28:10.31537Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-04-01T10:28:10.315904Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T10:28:10.319641Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T10:28:10.32961Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T10:28:10.32977Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T10:28:10.338106Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-01T10:28:10.320089Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-01T10:28:10.338477Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> gcp-auth [29bfc5c4bcf46dfcb83fb873fa74299d027cb71eb8d88ed99cf474af53d41e44] <==
	2024/04/01 10:29:20 GCP Auth Webhook started!
	2024/04/01 10:29:35 Ready to marshal response ...
	2024/04/01 10:29:35 Ready to write response ...
	2024/04/01 10:29:52 Ready to marshal response ...
	2024/04/01 10:29:52 Ready to write response ...
	2024/04/01 10:29:52 Ready to marshal response ...
	2024/04/01 10:29:52 Ready to write response ...
	2024/04/01 10:30:01 Ready to marshal response ...
	2024/04/01 10:30:01 Ready to write response ...
	2024/04/01 10:30:01 Ready to marshal response ...
	2024/04/01 10:30:01 Ready to write response ...
	2024/04/01 10:30:17 Ready to marshal response ...
	2024/04/01 10:30:17 Ready to write response ...
	2024/04/01 10:30:39 Ready to marshal response ...
	2024/04/01 10:30:39 Ready to write response ...
	2024/04/01 10:30:39 Ready to marshal response ...
	2024/04/01 10:30:39 Ready to write response ...
	2024/04/01 10:30:39 Ready to marshal response ...
	2024/04/01 10:30:39 Ready to write response ...
	2024/04/01 10:30:51 Ready to marshal response ...
	2024/04/01 10:30:51 Ready to write response ...
	2024/04/01 10:31:00 Ready to marshal response ...
	2024/04/01 10:31:00 Ready to write response ...
	
	
	==> kernel <==
	 10:31:27 up  2:13,  0 users,  load average: 1.81, 2.19, 2.98
	Linux addons-126557 5.15.0-1056-aws #61~20.04.1-Ubuntu SMP Wed Mar 13 17:45:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [ddafea8d880ba074bf3646139df82e6fc1126871cbf659fbc4b292716431094f] <==
	I0401 10:29:20.749893       1 main.go:227] handling current node
	I0401 10:29:30.758319       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0401 10:29:30.758350       1 main.go:227] handling current node
	I0401 10:29:40.769793       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0401 10:29:40.769822       1 main.go:227] handling current node
	I0401 10:29:50.778742       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0401 10:29:50.778774       1 main.go:227] handling current node
	I0401 10:30:00.790566       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0401 10:30:00.790665       1 main.go:227] handling current node
	I0401 10:30:10.806092       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0401 10:30:10.806130       1 main.go:227] handling current node
	I0401 10:30:20.815193       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0401 10:30:20.815222       1 main.go:227] handling current node
	I0401 10:30:30.824581       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0401 10:30:30.824619       1 main.go:227] handling current node
	I0401 10:30:40.843658       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0401 10:30:40.843699       1 main.go:227] handling current node
	I0401 10:30:50.852584       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0401 10:30:50.852611       1 main.go:227] handling current node
	I0401 10:31:00.877489       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0401 10:31:00.877576       1 main.go:227] handling current node
	I0401 10:31:10.882055       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0401 10:31:10.882084       1 main.go:227] handling current node
	I0401 10:31:20.894630       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0401 10:31:20.894655       1 main.go:227] handling current node
	
	
	==> kube-apiserver [93d075932a5705727a45f627d5371a81882ba6e89f2f9aabc00b1bb2d23ee90e] <==
	E0401 10:29:05.461964       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.49.167:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.49.167:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.49.167:443: connect: connection refused
	I0401 10:29:05.583927       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0401 10:29:39.207728       1 watch.go:253] http2: stream closed
	I0401 10:30:09.508032       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0401 10:30:17.313725       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0401 10:30:32.261677       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 10:30:32.261813       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0401 10:30:32.299027       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 10:30:32.299081       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0401 10:30:32.320415       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 10:30:32.320463       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0401 10:30:32.357866       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 10:30:32.357914       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0401 10:30:32.397489       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 10:30:32.397695       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0401 10:30:33.320397       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0401 10:30:33.397526       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0401 10:30:33.420542       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0401 10:30:39.196221       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.151.145"}
	I0401 10:30:50.913866       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0401 10:30:51.210153       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.61.250"}
	I0401 10:30:55.839939       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0401 10:30:56.874552       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0401 10:31:01.088864       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.71.204"}
	I0401 10:31:06.468789       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [17719dfe11d7d1c3a67232ab013a54dfb0b2fa0c986cb99026e107a7b4bd6c8f] <==
	E0401 10:31:00.677311       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0401 10:31:00.838585       1 event.go:376] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0401 10:31:00.865166       1 event.go:376] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-5v6jv"
	I0401 10:31:00.891006       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="51.9027ms"
	I0401 10:31:00.925251       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="34.187454ms"
	I0401 10:31:00.948296       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="22.991347ms"
	I0401 10:31:00.948414       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="73.434µs"
	I0401 10:31:03.698676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="36.799µs"
	W0401 10:31:04.406258       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 10:31:04.406296       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0401 10:31:04.704371       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="59.822µs"
	I0401 10:31:05.706069       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="39.531µs"
	I0401 10:31:06.196643       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	W0401 10:31:07.501736       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 10:31:07.501770       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0401 10:31:09.310854       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 10:31:09.310890       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0401 10:31:11.349899       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 10:31:11.349937       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0401 10:31:12.070435       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 10:31:12.070537       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0401 10:31:17.738458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="37.653µs"
	I0401 10:31:18.433641       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0401 10:31:18.440200       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="4.644µs"
	I0401 10:31:18.452198       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	
	
	==> kube-proxy [93e16bca3b2c9f6ef089e87cfda46247815b49e0068a3546275efb934baa15a5] <==
	I0401 10:28:30.359252       1 server_others.go:72] "Using iptables proxy"
	I0401 10:28:30.382344       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0401 10:28:30.448263       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0401 10:28:30.448296       1 server_others.go:168] "Using iptables Proxier"
	I0401 10:28:30.450378       1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0401 10:28:30.450401       1 server_others.go:529] "Defaulting to no-op detect-local"
	I0401 10:28:30.450431       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 10:28:30.450685       1 server.go:865] "Version info" version="v1.29.3"
	I0401 10:28:30.450697       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 10:28:30.458941       1 config.go:188] "Starting service config controller"
	I0401 10:28:30.458971       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 10:28:30.459000       1 config.go:97] "Starting endpoint slice config controller"
	I0401 10:28:30.459004       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 10:28:30.462563       1 config.go:315] "Starting node config controller"
	I0401 10:28:30.462584       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 10:28:30.559045       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0401 10:28:30.559101       1 shared_informer.go:318] Caches are synced for service config
	I0401 10:28:30.564333       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6b44715270cf18e271c6847c501e022d6305cf332d06d0f466ea61913a1374a9] <==
	W0401 10:28:13.043140       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 10:28:13.043183       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0401 10:28:13.043305       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 10:28:13.043395       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0401 10:28:13.043542       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 10:28:13.043563       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0401 10:28:13.043730       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 10:28:13.043753       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0401 10:28:13.044092       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 10:28:13.044156       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0401 10:28:13.923889       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 10:28:13.924162       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0401 10:28:14.069956       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 10:28:14.070009       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0401 10:28:14.134424       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 10:28:14.134528       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0401 10:28:14.191668       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 10:28:14.191738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 10:28:14.216051       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 10:28:14.216283       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0401 10:28:14.226602       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 10:28:14.226846       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0401 10:28:14.248957       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 10:28:14.248997       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0401 10:28:14.621494       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 10:31:05 addons-126557 kubelet[1501]: E0401 10:31:05.695556    1501 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-5v6jv_default(3da52033-5d86-4e0a-a9d3-3d81cda92a7d)\"" pod="default/hello-world-app-5d77478584-5v6jv" podUID="3da52033-5d86-4e0a-a9d3-3d81cda92a7d"
	Apr 01 10:31:07 addons-126557 kubelet[1501]: I0401 10:31:07.201956    1501 scope.go:117] "RemoveContainer" containerID="095e0fae12aa0bd754c759d4bfad58ff4ea48f5a976d684e3b8b97f9b69dff50"
	Apr 01 10:31:07 addons-126557 kubelet[1501]: E0401 10:31:07.202269    1501 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(be4cd3ee-7e81-4dbd-b313-0ce7ac5c10ca)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="be4cd3ee-7e81-4dbd-b313-0ce7ac5c10ca"
	Apr 01 10:31:17 addons-126557 kubelet[1501]: I0401 10:31:17.048596    1501 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkqcr\" (UniqueName: \"kubernetes.io/projected/be4cd3ee-7e81-4dbd-b313-0ce7ac5c10ca-kube-api-access-xkqcr\") pod \"be4cd3ee-7e81-4dbd-b313-0ce7ac5c10ca\" (UID: \"be4cd3ee-7e81-4dbd-b313-0ce7ac5c10ca\") "
	Apr 01 10:31:17 addons-126557 kubelet[1501]: I0401 10:31:17.053717    1501 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be4cd3ee-7e81-4dbd-b313-0ce7ac5c10ca-kube-api-access-xkqcr" (OuterVolumeSpecName: "kube-api-access-xkqcr") pod "be4cd3ee-7e81-4dbd-b313-0ce7ac5c10ca" (UID: "be4cd3ee-7e81-4dbd-b313-0ce7ac5c10ca"). InnerVolumeSpecName "kube-api-access-xkqcr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 01 10:31:17 addons-126557 kubelet[1501]: I0401 10:31:17.149986    1501 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xkqcr\" (UniqueName: \"kubernetes.io/projected/be4cd3ee-7e81-4dbd-b313-0ce7ac5c10ca-kube-api-access-xkqcr\") on node \"addons-126557\" DevicePath \"\""
	Apr 01 10:31:17 addons-126557 kubelet[1501]: I0401 10:31:17.201527    1501 scope.go:117] "RemoveContainer" containerID="2a631faecd8b5f7000af43b08a74d92c6ef996411acb8ef4a7ce07e61e9f3e7a"
	Apr 01 10:31:17 addons-126557 kubelet[1501]: I0401 10:31:17.721730    1501 scope.go:117] "RemoveContainer" containerID="2a631faecd8b5f7000af43b08a74d92c6ef996411acb8ef4a7ce07e61e9f3e7a"
	Apr 01 10:31:17 addons-126557 kubelet[1501]: I0401 10:31:17.722116    1501 scope.go:117] "RemoveContainer" containerID="470e1f169962daf5db3a94003edfeabd79046180a5875e64873b7e1d72be924f"
	Apr 01 10:31:17 addons-126557 kubelet[1501]: E0401 10:31:17.722383    1501 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-5v6jv_default(3da52033-5d86-4e0a-a9d3-3d81cda92a7d)\"" pod="default/hello-world-app-5d77478584-5v6jv" podUID="3da52033-5d86-4e0a-a9d3-3d81cda92a7d"
	Apr 01 10:31:17 addons-126557 kubelet[1501]: I0401 10:31:17.743399    1501 scope.go:117] "RemoveContainer" containerID="095e0fae12aa0bd754c759d4bfad58ff4ea48f5a976d684e3b8b97f9b69dff50"
	Apr 01 10:31:18 addons-126557 kubelet[1501]: I0401 10:31:18.204223    1501 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be4cd3ee-7e81-4dbd-b313-0ce7ac5c10ca" path="/var/lib/kubelet/pods/be4cd3ee-7e81-4dbd-b313-0ce7ac5c10ca/volumes"
	Apr 01 10:31:20 addons-126557 kubelet[1501]: I0401 10:31:20.204459    1501 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60b98a38-5af2-46c1-a59f-773103406776" path="/var/lib/kubelet/pods/60b98a38-5af2-46c1-a59f-773103406776/volumes"
	Apr 01 10:31:20 addons-126557 kubelet[1501]: I0401 10:31:20.204846    1501 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba19ce81-22d6-4374-974b-64d7287ec9eb" path="/var/lib/kubelet/pods/ba19ce81-22d6-4374-974b-64d7287ec9eb/volumes"
	Apr 01 10:31:21 addons-126557 kubelet[1501]: I0401 10:31:21.747428    1501 scope.go:117] "RemoveContainer" containerID="9d387bf536e0e2cb9220701f8e95176c53240fb0c48bcc688d0ffb22a282c1c2"
	Apr 01 10:31:21 addons-126557 kubelet[1501]: I0401 10:31:21.754726    1501 scope.go:117] "RemoveContainer" containerID="9d387bf536e0e2cb9220701f8e95176c53240fb0c48bcc688d0ffb22a282c1c2"
	Apr 01 10:31:21 addons-126557 kubelet[1501]: E0401 10:31:21.755233    1501 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d387bf536e0e2cb9220701f8e95176c53240fb0c48bcc688d0ffb22a282c1c2\": not found" containerID="9d387bf536e0e2cb9220701f8e95176c53240fb0c48bcc688d0ffb22a282c1c2"
	Apr 01 10:31:21 addons-126557 kubelet[1501]: I0401 10:31:21.755283    1501 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d387bf536e0e2cb9220701f8e95176c53240fb0c48bcc688d0ffb22a282c1c2"} err="failed to get container status \"9d387bf536e0e2cb9220701f8e95176c53240fb0c48bcc688d0ffb22a282c1c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d387bf536e0e2cb9220701f8e95176c53240fb0c48bcc688d0ffb22a282c1c2\": not found"
	Apr 01 10:31:21 addons-126557 kubelet[1501]: I0401 10:31:21.792762    1501 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47c9253e-9270-4e51-a66a-92cd0b914bef-webhook-cert\") pod \"47c9253e-9270-4e51-a66a-92cd0b914bef\" (UID: \"47c9253e-9270-4e51-a66a-92cd0b914bef\") "
	Apr 01 10:31:21 addons-126557 kubelet[1501]: I0401 10:31:21.792841    1501 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv9b8\" (UniqueName: \"kubernetes.io/projected/47c9253e-9270-4e51-a66a-92cd0b914bef-kube-api-access-cv9b8\") pod \"47c9253e-9270-4e51-a66a-92cd0b914bef\" (UID: \"47c9253e-9270-4e51-a66a-92cd0b914bef\") "
	Apr 01 10:31:21 addons-126557 kubelet[1501]: I0401 10:31:21.796226    1501 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47c9253e-9270-4e51-a66a-92cd0b914bef-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "47c9253e-9270-4e51-a66a-92cd0b914bef" (UID: "47c9253e-9270-4e51-a66a-92cd0b914bef"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 01 10:31:21 addons-126557 kubelet[1501]: I0401 10:31:21.796727    1501 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47c9253e-9270-4e51-a66a-92cd0b914bef-kube-api-access-cv9b8" (OuterVolumeSpecName: "kube-api-access-cv9b8") pod "47c9253e-9270-4e51-a66a-92cd0b914bef" (UID: "47c9253e-9270-4e51-a66a-92cd0b914bef"). InnerVolumeSpecName "kube-api-access-cv9b8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 01 10:31:21 addons-126557 kubelet[1501]: I0401 10:31:21.894064    1501 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47c9253e-9270-4e51-a66a-92cd0b914bef-webhook-cert\") on node \"addons-126557\" DevicePath \"\""
	Apr 01 10:31:21 addons-126557 kubelet[1501]: I0401 10:31:21.894110    1501 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cv9b8\" (UniqueName: \"kubernetes.io/projected/47c9253e-9270-4e51-a66a-92cd0b914bef-kube-api-access-cv9b8\") on node \"addons-126557\" DevicePath \"\""
	Apr 01 10:31:22 addons-126557 kubelet[1501]: I0401 10:31:22.204532    1501 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47c9253e-9270-4e51-a66a-92cd0b914bef" path="/var/lib/kubelet/pods/47c9253e-9270-4e51-a66a-92cd0b914bef/volumes"
	
	
	==> storage-provisioner [f98eca9d86666e35ab137885fce647614efbe3f863da3e06c41e44c2cacb07b1] <==
	I0401 10:28:36.074815       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0401 10:28:36.102080       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0401 10:28:36.102128       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0401 10:28:36.125381       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0401 10:28:36.128708       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3380caa7-4734-4295-af95-ddb00ac18f6c", APIVersion:"v1", ResourceVersion:"590", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-126557_4ca4523e-a559-4bb3-86de-d3150f6af0e2 became leader
	I0401 10:28:36.129346       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-126557_4ca4523e-a559-4bb3-86de-d3150f6af0e2!
	I0401 10:28:36.231137       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-126557_4ca4523e-a559-4bb3-86de-d3150f6af0e2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-126557 -n addons-126557
helpers_test.go:261: (dbg) Run:  kubectl --context addons-126557 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (37.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 image load --daemon gcr.io/google-containers/addon-resizer:functional-805196 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-805196 image load --daemon gcr.io/google-containers/addon-resizer:functional-805196 --alsologtostderr: (4.151696157s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-805196" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.40s)
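
This failure, and the ImageReloadDaemon and ImageTagAndLoadDaemon failures that follow, all hinge on the same check: the test tags an image into the local Docker daemon, loads it into the cluster with "image load --daemon", and then expects the tag to show up in "image ls". A minimal shell sketch of that sequence, using the same profile and tag as the tests; the trailing grep is only a manual convenience added here, not part of the test:

	# Tag the image into the local Docker daemon, push it into the cluster, then list cluster images.
	docker pull gcr.io/google-containers/addon-resizer:1.8.9
	docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-805196
	out/minikube-linux-arm64 -p functional-805196 image load --daemon gcr.io/google-containers/addon-resizer:functional-805196 --alsologtostderr
	# The tests fail because the loaded tag is missing from this listing.
	out/minikube-linux-arm64 -p functional-805196 image ls | grep addon-resizer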

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 image load --daemon gcr.io/google-containers/addon-resizer:functional-805196 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-805196 image load --daemon gcr.io/google-containers/addon-resizer:functional-805196 --alsologtostderr: (3.210367681s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-805196" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.337699603s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-805196
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 image load --daemon gcr.io/google-containers/addon-resizer:functional-805196 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-805196 image load --daemon gcr.io/google-containers/addon-resizer:functional-805196 --alsologtostderr: (3.147277611s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-805196" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 image save gcr.io/google-containers/addon-resizer:functional-805196 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0401 10:36:36.627959  479366 out.go:291] Setting OutFile to fd 1 ...
	I0401 10:36:36.628116  479366 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:36:36.628127  479366 out.go:304] Setting ErrFile to fd 2...
	I0401 10:36:36.628133  479366 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:36:36.628379  479366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18551-440344/.minikube/bin
	I0401 10:36:36.629084  479366 config.go:182] Loaded profile config "functional-805196": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0401 10:36:36.629210  479366 config.go:182] Loaded profile config "functional-805196": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0401 10:36:36.629695  479366 cli_runner.go:164] Run: docker container inspect functional-805196 --format={{.State.Status}}
	I0401 10:36:36.645173  479366 ssh_runner.go:195] Run: systemctl --version
	I0401 10:36:36.645316  479366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-805196
	I0401 10:36:36.661045  479366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/functional-805196/id_rsa Username:docker}
	I0401 10:36:36.753631  479366 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0401 10:36:36.753744  479366 cache_images.go:254] Failed to load cached images for profile functional-805196. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0401 10:36:36.753790  479366 cache_images.go:262] succeeded pushing to: 
	I0401 10:36:36.753803  479366 cache_images.go:263] failed pushing to: functional-805196

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
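
The stderr above shows why this load failed: stat on /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar returned "no such file or directory", consistent with the earlier ImageSaveToFile failure in which that tarball was never written. A minimal shell sketch of the save-and-reload round trip the two tests exercise, using the same profile and path; the ls -l check is an addition for manual verification, not part of the tests:

	# Save the image to a tarball, confirm the file exists, then load it back into the cluster.
	out/minikube-linux-arm64 -p functional-805196 image save gcr.io/google-containers/addon-resizer:functional-805196 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
	# ImageSaveToFile failed because this file did not exist after the save.
	ls -l /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	out/minikube-linux-arm64 -p functional-805196 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr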

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (383.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-869040 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0401 11:14:25.741696  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-869040 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m19.595995059s)

                                                
                                                
-- stdout --
	* [old-k8s-version-869040] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18551
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18551-440344/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18551-440344/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-869040" primary control-plane node in "old-k8s-version-869040" cluster
	* Pulling base image v0.0.43-1711559786-18485 ...
	* Restarting existing docker container for "old-k8s-version-869040" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-869040 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, dashboard, metrics-server, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 11:14:02.865011  643361 out.go:291] Setting OutFile to fd 1 ...
	I0401 11:14:02.865731  643361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:14:02.865742  643361 out.go:304] Setting ErrFile to fd 2...
	I0401 11:14:02.865748  643361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:14:02.866014  643361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18551-440344/.minikube/bin
	I0401 11:14:02.866412  643361 out.go:298] Setting JSON to false
	I0401 11:14:02.867461  643361 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10593,"bootTime":1711959450,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0401 11:14:02.867572  643361 start.go:139] virtualization:  
	I0401 11:14:02.871602  643361 out.go:177] * [old-k8s-version-869040] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0401 11:14:02.873989  643361 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 11:14:02.874034  643361 notify.go:220] Checking for updates...
	I0401 11:14:02.878616  643361 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 11:14:02.881226  643361 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18551-440344/kubeconfig
	I0401 11:14:02.883557  643361 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18551-440344/.minikube
	I0401 11:14:02.885372  643361 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0401 11:14:02.887896  643361 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 11:14:02.890316  643361 config.go:182] Loaded profile config "old-k8s-version-869040": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0401 11:14:02.892589  643361 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0401 11:14:02.894918  643361 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 11:14:02.921782  643361 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0401 11:14:02.921914  643361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 11:14:03.055635  643361 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-01 11:14:03.044625759 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0401 11:14:03.055761  643361 docker.go:295] overlay module found
	I0401 11:14:03.060732  643361 out.go:177] * Using the docker driver based on existing profile
	I0401 11:14:03.063178  643361 start.go:297] selected driver: docker
	I0401 11:14:03.063206  643361 start.go:901] validating driver "docker" against &{Name:old-k8s-version-869040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-869040 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 11:14:03.063367  643361 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 11:14:03.064031  643361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 11:14:03.171750  643361 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-01 11:14:03.159057195 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0401 11:14:03.172746  643361 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 11:14:03.172824  643361 cni.go:84] Creating CNI manager for ""
	I0401 11:14:03.172842  643361 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0401 11:14:03.172903  643361 start.go:340] cluster config:
	{Name:old-k8s-version-869040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-869040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 11:14:03.176788  643361 out.go:177] * Starting "old-k8s-version-869040" primary control-plane node in "old-k8s-version-869040" cluster
	I0401 11:14:03.178697  643361 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0401 11:14:03.180724  643361 out.go:177] * Pulling base image v0.0.43-1711559786-18485 ...
	I0401 11:14:03.182690  643361 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0401 11:14:03.182754  643361 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18551-440344/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0401 11:14:03.182771  643361 cache.go:56] Caching tarball of preloaded images
	I0401 11:14:03.182867  643361 preload.go:173] Found /home/jenkins/minikube-integration/18551-440344/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0401 11:14:03.182884  643361 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0401 11:14:03.183002  643361 profile.go:143] Saving config to /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/config.json ...
	I0401 11:14:03.183244  643361 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon
	I0401 11:14:03.199944  643361 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon, skipping pull
	I0401 11:14:03.199975  643361 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 exists in daemon, skipping load
	I0401 11:14:03.199994  643361 cache.go:194] Successfully downloaded all kic artifacts
	I0401 11:14:03.200032  643361 start.go:360] acquireMachinesLock for old-k8s-version-869040: {Name:mkfabc548348d9c7c8a2f0a68caaf4e236b55802 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 11:14:03.200105  643361 start.go:364] duration metric: took 41.607µs to acquireMachinesLock for "old-k8s-version-869040"
	I0401 11:14:03.200128  643361 start.go:96] Skipping create...Using existing machine configuration
	I0401 11:14:03.200140  643361 fix.go:54] fixHost starting: 
	I0401 11:14:03.200418  643361 cli_runner.go:164] Run: docker container inspect old-k8s-version-869040 --format={{.State.Status}}
	I0401 11:14:03.222293  643361 fix.go:112] recreateIfNeeded on old-k8s-version-869040: state=Stopped err=<nil>
	W0401 11:14:03.222335  643361 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 11:14:03.224646  643361 out.go:177] * Restarting existing docker container for "old-k8s-version-869040" ...
	I0401 11:14:03.226707  643361 cli_runner.go:164] Run: docker start old-k8s-version-869040
	I0401 11:14:03.664289  643361 cli_runner.go:164] Run: docker container inspect old-k8s-version-869040 --format={{.State.Status}}
	I0401 11:14:03.682815  643361 kic.go:430] container "old-k8s-version-869040" state is running.
	I0401 11:14:03.683430  643361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-869040
	I0401 11:14:03.702755  643361 profile.go:143] Saving config to /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/config.json ...
	I0401 11:14:03.702978  643361 machine.go:94] provisionDockerMachine start ...
	I0401 11:14:03.703038  643361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-869040
	I0401 11:14:03.740925  643361 main.go:141] libmachine: Using SSH client type: native
	I0401 11:14:03.741299  643361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I0401 11:14:03.741313  643361 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 11:14:03.742684  643361 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49970->127.0.0.1:33466: read: connection reset by peer
	I0401 11:14:06.880575  643361 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-869040
	
	I0401 11:14:06.880601  643361 ubuntu.go:169] provisioning hostname "old-k8s-version-869040"
	I0401 11:14:06.880671  643361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-869040
	I0401 11:14:06.897718  643361 main.go:141] libmachine: Using SSH client type: native
	I0401 11:14:06.897963  643361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I0401 11:14:06.897979  643361 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-869040 && echo "old-k8s-version-869040" | sudo tee /etc/hostname
	I0401 11:14:07.049663  643361 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-869040
	
	I0401 11:14:07.049741  643361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-869040
	I0401 11:14:07.066237  643361 main.go:141] libmachine: Using SSH client type: native
	I0401 11:14:07.066489  643361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I0401 11:14:07.066516  643361 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-869040' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-869040/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-869040' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 11:14:07.205217  643361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 11:14:07.205247  643361 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18551-440344/.minikube CaCertPath:/home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18551-440344/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18551-440344/.minikube}
	I0401 11:14:07.205276  643361 ubuntu.go:177] setting up certificates
	I0401 11:14:07.205285  643361 provision.go:84] configureAuth start
	I0401 11:14:07.205356  643361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-869040
	I0401 11:14:07.222755  643361 provision.go:143] copyHostCerts
	I0401 11:14:07.222824  643361 exec_runner.go:144] found /home/jenkins/minikube-integration/18551-440344/.minikube/ca.pem, removing ...
	I0401 11:14:07.222844  643361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18551-440344/.minikube/ca.pem
	I0401 11:14:07.222922  643361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18551-440344/.minikube/ca.pem (1078 bytes)
	I0401 11:14:07.223029  643361 exec_runner.go:144] found /home/jenkins/minikube-integration/18551-440344/.minikube/cert.pem, removing ...
	I0401 11:14:07.223041  643361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18551-440344/.minikube/cert.pem
	I0401 11:14:07.223070  643361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18551-440344/.minikube/cert.pem (1123 bytes)
	I0401 11:14:07.223131  643361 exec_runner.go:144] found /home/jenkins/minikube-integration/18551-440344/.minikube/key.pem, removing ...
	I0401 11:14:07.223140  643361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18551-440344/.minikube/key.pem
	I0401 11:14:07.223168  643361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18551-440344/.minikube/key.pem (1679 bytes)
	I0401 11:14:07.223227  643361 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18551-440344/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-869040 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-869040]
	I0401 11:14:08.216087  643361 provision.go:177] copyRemoteCerts
	I0401 11:14:08.216156  643361 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 11:14:08.216201  643361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-869040
	I0401 11:14:08.234967  643361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/old-k8s-version-869040/id_rsa Username:docker}
	I0401 11:14:08.338568  643361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 11:14:08.364644  643361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 11:14:08.389902  643361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 11:14:08.415547  643361 provision.go:87] duration metric: took 1.210247326s to configureAuth
	I0401 11:14:08.415598  643361 ubuntu.go:193] setting minikube options for container-runtime
	I0401 11:14:08.415783  643361 config.go:182] Loaded profile config "old-k8s-version-869040": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0401 11:14:08.415790  643361 machine.go:97] duration metric: took 4.712804701s to provisionDockerMachine
	I0401 11:14:08.415798  643361 start.go:293] postStartSetup for "old-k8s-version-869040" (driver="docker")
	I0401 11:14:08.415809  643361 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 11:14:08.415855  643361 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 11:14:08.415892  643361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-869040
	I0401 11:14:08.431955  643361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/old-k8s-version-869040/id_rsa Username:docker}
	I0401 11:14:08.530576  643361 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 11:14:08.533930  643361 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 11:14:08.533967  643361 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 11:14:08.533978  643361 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 11:14:08.533986  643361 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0401 11:14:08.533997  643361 filesync.go:126] Scanning /home/jenkins/minikube-integration/18551-440344/.minikube/addons for local assets ...
	I0401 11:14:08.534056  643361 filesync.go:126] Scanning /home/jenkins/minikube-integration/18551-440344/.minikube/files for local assets ...
	I0401 11:14:08.534145  643361 filesync.go:149] local asset: /home/jenkins/minikube-integration/18551-440344/.minikube/files/etc/ssl/certs/4457542.pem -> 4457542.pem in /etc/ssl/certs
	I0401 11:14:08.534255  643361 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 11:14:08.542996  643361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/files/etc/ssl/certs/4457542.pem --> /etc/ssl/certs/4457542.pem (1708 bytes)
	I0401 11:14:08.567826  643361 start.go:296] duration metric: took 152.011249ms for postStartSetup
	I0401 11:14:08.567922  643361 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 11:14:08.567968  643361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-869040
	I0401 11:14:08.588152  643361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/old-k8s-version-869040/id_rsa Username:docker}
	I0401 11:14:08.682008  643361 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 11:14:08.686519  643361 fix.go:56] duration metric: took 5.486371569s for fixHost
	I0401 11:14:08.686543  643361 start.go:83] releasing machines lock for "old-k8s-version-869040", held for 5.486426674s
	I0401 11:14:08.686619  643361 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-869040
	I0401 11:14:08.703688  643361 ssh_runner.go:195] Run: cat /version.json
	I0401 11:14:08.703754  643361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-869040
	I0401 11:14:08.704015  643361 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 11:14:08.704071  643361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-869040
	I0401 11:14:08.724219  643361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/old-k8s-version-869040/id_rsa Username:docker}
	I0401 11:14:08.727453  643361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/old-k8s-version-869040/id_rsa Username:docker}
	I0401 11:14:08.927494  643361 ssh_runner.go:195] Run: systemctl --version
	I0401 11:14:08.931966  643361 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 11:14:08.936410  643361 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0401 11:14:08.954884  643361 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0401 11:14:08.954968  643361 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 11:14:08.964021  643361 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 11:14:08.964046  643361 start.go:494] detecting cgroup driver to use...
	I0401 11:14:08.964079  643361 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0401 11:14:08.964126  643361 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0401 11:14:08.978296  643361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 11:14:08.990055  643361 docker.go:217] disabling cri-docker service (if available) ...
	I0401 11:14:08.990133  643361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 11:14:09.010146  643361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 11:14:09.023099  643361 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 11:14:09.120048  643361 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 11:14:09.209880  643361 docker.go:233] disabling docker service ...
	I0401 11:14:09.210021  643361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 11:14:09.223309  643361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 11:14:09.234962  643361 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 11:14:09.339647  643361 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 11:14:09.424067  643361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 11:14:09.437850  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 11:14:09.455415  643361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0401 11:14:09.466082  643361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 11:14:09.476411  643361 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 11:14:09.476490  643361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 11:14:09.487575  643361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:14:09.497627  643361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 11:14:09.507498  643361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:14:09.517854  643361 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 11:14:09.527401  643361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 11:14:09.537920  643361 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 11:14:09.546775  643361 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 11:14:09.555384  643361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:14:09.644961  643361 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0401 11:14:09.801496  643361 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0401 11:14:09.801567  643361 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0401 11:14:09.806088  643361 start.go:562] Will wait 60s for crictl version
	I0401 11:14:09.806164  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:14:09.809633  643361 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 11:14:09.847299  643361 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0401 11:14:09.847374  643361 ssh_runner.go:195] Run: containerd --version
	I0401 11:14:09.868603  643361 ssh_runner.go:195] Run: containerd --version
	I0401 11:14:09.893682  643361 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	I0401 11:14:09.895666  643361 cli_runner.go:164] Run: docker network inspect old-k8s-version-869040 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 11:14:09.908849  643361 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0401 11:14:09.912443  643361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 11:14:09.922756  643361 kubeadm.go:877] updating cluster {Name:old-k8s-version-869040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-869040 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 11:14:09.922891  643361 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0401 11:14:09.922955  643361 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 11:14:09.961941  643361 containerd.go:627] all images are preloaded for containerd runtime.
	I0401 11:14:09.961965  643361 containerd.go:534] Images already preloaded, skipping extraction
	I0401 11:14:09.962023  643361 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 11:14:09.997236  643361 containerd.go:627] all images are preloaded for containerd runtime.
	I0401 11:14:09.997258  643361 cache_images.go:84] Images are preloaded, skipping loading
	I0401 11:14:09.997267  643361 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0401 11:14:09.997395  643361 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-869040 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-869040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 11:14:09.997476  643361 ssh_runner.go:195] Run: sudo crictl info
	I0401 11:14:10.050827  643361 cni.go:84] Creating CNI manager for ""
	I0401 11:14:10.050855  643361 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0401 11:14:10.050867  643361 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 11:14:10.050925  643361 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-869040 NodeName:old-k8s-version-869040 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 11:14:10.051101  643361 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-869040"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 11:14:10.051202  643361 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 11:14:10.060811  643361 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 11:14:10.060924  643361 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 11:14:10.070761  643361 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0401 11:14:10.091762  643361 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 11:14:10.113260  643361 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0401 11:14:10.132911  643361 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0401 11:14:10.136629  643361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 11:14:10.150293  643361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:14:10.244682  643361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 11:14:10.261230  643361 certs.go:68] Setting up /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040 for IP: 192.168.85.2
	I0401 11:14:10.261304  643361 certs.go:194] generating shared ca certs ...
	I0401 11:14:10.261347  643361 certs.go:226] acquiring lock for ca certs: {Name:mkcd78655f97da7a9cc32a54b546078a42807779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:14:10.261516  643361 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18551-440344/.minikube/ca.key
	I0401 11:14:10.261606  643361 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18551-440344/.minikube/proxy-client-ca.key
	I0401 11:14:10.261640  643361 certs.go:256] generating profile certs ...
	I0401 11:14:10.261771  643361 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/client.key
	I0401 11:14:10.261871  643361 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/apiserver.key.d4267ff7
	I0401 11:14:10.261947  643361 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/proxy-client.key
	I0401 11:14:10.262100  643361 certs.go:484] found cert: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/445754.pem (1338 bytes)
	W0401 11:14:10.262157  643361 certs.go:480] ignoring /home/jenkins/minikube-integration/18551-440344/.minikube/certs/445754_empty.pem, impossibly tiny 0 bytes
	I0401 11:14:10.262183  643361 certs.go:484] found cert: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 11:14:10.262238  643361 certs.go:484] found cert: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca.pem (1078 bytes)
	I0401 11:14:10.262289  643361 certs.go:484] found cert: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/cert.pem (1123 bytes)
	I0401 11:14:10.262342  643361 certs.go:484] found cert: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/key.pem (1679 bytes)
	I0401 11:14:10.262424  643361 certs.go:484] found cert: /home/jenkins/minikube-integration/18551-440344/.minikube/files/etc/ssl/certs/4457542.pem (1708 bytes)
	I0401 11:14:10.263143  643361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 11:14:10.292736  643361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 11:14:10.319495  643361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 11:14:10.349816  643361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0401 11:14:10.377683  643361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 11:14:10.406030  643361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 11:14:10.436936  643361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 11:14:10.462780  643361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 11:14:10.487969  643361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/files/etc/ssl/certs/4457542.pem --> /usr/share/ca-certificates/4457542.pem (1708 bytes)
	I0401 11:14:10.516488  643361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 11:14:10.541241  643361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/certs/445754.pem --> /usr/share/ca-certificates/445754.pem (1338 bytes)
	I0401 11:14:10.566108  643361 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 11:14:10.584310  643361 ssh_runner.go:195] Run: openssl version
	I0401 11:14:10.589768  643361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4457542.pem && ln -fs /usr/share/ca-certificates/4457542.pem /etc/ssl/certs/4457542.pem"
	I0401 11:14:10.599367  643361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4457542.pem
	I0401 11:14:10.602935  643361 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 10:33 /usr/share/ca-certificates/4457542.pem
	I0401 11:14:10.603007  643361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4457542.pem
	I0401 11:14:10.609828  643361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4457542.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 11:14:10.618799  643361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 11:14:10.627941  643361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:14:10.631625  643361 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 10:27 /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:14:10.631700  643361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:14:10.638489  643361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 11:14:10.648505  643361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/445754.pem && ln -fs /usr/share/ca-certificates/445754.pem /etc/ssl/certs/445754.pem"
	I0401 11:14:10.658091  643361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/445754.pem
	I0401 11:14:10.661786  643361 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 10:33 /usr/share/ca-certificates/445754.pem
	I0401 11:14:10.661849  643361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/445754.pem
	I0401 11:14:10.669032  643361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/445754.pem /etc/ssl/certs/51391683.0"
	I0401 11:14:10.677976  643361 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 11:14:10.681615  643361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 11:14:10.689555  643361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 11:14:10.696990  643361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 11:14:10.704115  643361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 11:14:10.711032  643361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 11:14:10.718323  643361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 11:14:10.725149  643361 kubeadm.go:391] StartCluster: {Name:old-k8s-version-869040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-869040 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 11:14:10.725251  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0401 11:14:10.725310  643361 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 11:14:10.763947  643361 cri.go:89] found id: "8fb4a48cf27c6725c90da61570872834c03f4dc1865197b3f1426f0f827f894a"
	I0401 11:14:10.763989  643361 cri.go:89] found id: "27fe2520ebb746e49d07fcf1ad334f5c492ba78c197570a874201267d3ffaa3f"
	I0401 11:14:10.764012  643361 cri.go:89] found id: "000258366df555fd593e48dc6fd6719819779883aa429d40cd8ed39751170282"
	I0401 11:14:10.764022  643361 cri.go:89] found id: "f9d360e936a135d336c2d41db86ccc367112509974d2c56fcd3fa3d7fb335b9a"
	I0401 11:14:10.764026  643361 cri.go:89] found id: "723bd4c79f413d52322777fc3b785237740dffca6ca180349b1c3d3f0909a986"
	I0401 11:14:10.764035  643361 cri.go:89] found id: "bc5d280a36c75d595a5c1cd4fe8b305b0c4080ad9491bcbc2be95f95859bde09"
	I0401 11:14:10.764042  643361 cri.go:89] found id: "e0e8f6e0a25eea42eff804ff1ba93241ee0066db840a8f6bbb42e7b3c2a34680"
	I0401 11:14:10.764066  643361 cri.go:89] found id: "e927ec7e31fb177888402d84efcb61941c46fabf9ca972372e611a63a958c932"
	I0401 11:14:10.764069  643361 cri.go:89] found id: ""
	I0401 11:14:10.764119  643361 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0401 11:14:10.776299  643361 cri.go:116] JSON = null
	W0401 11:14:10.776369  643361 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0401 11:14:10.776480  643361 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 11:14:10.785695  643361 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 11:14:10.785753  643361 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 11:14:10.785773  643361 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 11:14:10.785850  643361 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 11:14:10.794160  643361 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 11:14:10.794755  643361 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-869040" does not appear in /home/jenkins/minikube-integration/18551-440344/kubeconfig
	I0401 11:14:10.795016  643361 kubeconfig.go:62] /home/jenkins/minikube-integration/18551-440344/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-869040" cluster setting kubeconfig missing "old-k8s-version-869040" context setting]
	I0401 11:14:10.795506  643361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/kubeconfig: {Name:mka3c2a4390d3645e6f38c74c25892daa576bd87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:14:10.796911  643361 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 11:14:10.805629  643361 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.85.2
	I0401 11:14:10.805711  643361 kubeadm.go:591] duration metric: took 19.903136ms to restartPrimaryControlPlane
	I0401 11:14:10.805744  643361 kubeadm.go:393] duration metric: took 80.601776ms to StartCluster
	I0401 11:14:10.805776  643361 settings.go:142] acquiring lock: {Name:mk276d29ae3bc72f373a524094e329002a16d918 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:14:10.805840  643361 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18551-440344/kubeconfig
	I0401 11:14:10.806750  643361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/kubeconfig: {Name:mka3c2a4390d3645e6f38c74c25892daa576bd87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:14:10.806952  643361 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0401 11:14:10.810108  643361 out.go:177] * Verifying Kubernetes components...
	I0401 11:14:10.807251  643361 config.go:182] Loaded profile config "old-k8s-version-869040": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0401 11:14:10.807275  643361 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 11:14:10.812287  643361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:14:10.812294  643361 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-869040"
	I0401 11:14:10.812371  643361 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-869040"
	W0401 11:14:10.812403  643361 addons.go:243] addon storage-provisioner should already be in state true
	I0401 11:14:10.812436  643361 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-869040"
	I0401 11:14:10.812459  643361 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-869040"
	I0401 11:14:10.812490  643361 host.go:66] Checking if "old-k8s-version-869040" exists ...
	I0401 11:14:10.812735  643361 cli_runner.go:164] Run: docker container inspect old-k8s-version-869040 --format={{.State.Status}}
	I0401 11:14:10.813033  643361 cli_runner.go:164] Run: docker container inspect old-k8s-version-869040 --format={{.State.Status}}
	I0401 11:14:10.812381  643361 addons.go:69] Setting dashboard=true in profile "old-k8s-version-869040"
	I0401 11:14:10.817565  643361 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-869040"
	I0401 11:14:10.817594  643361 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-869040"
	W0401 11:14:10.817601  643361 addons.go:243] addon metrics-server should already be in state true
	I0401 11:14:10.817637  643361 host.go:66] Checking if "old-k8s-version-869040" exists ...
	I0401 11:14:10.818288  643361 addons.go:234] Setting addon dashboard=true in "old-k8s-version-869040"
	W0401 11:14:10.818307  643361 addons.go:243] addon dashboard should already be in state true
	I0401 11:14:10.818340  643361 host.go:66] Checking if "old-k8s-version-869040" exists ...
	I0401 11:14:10.818735  643361 cli_runner.go:164] Run: docker container inspect old-k8s-version-869040 --format={{.State.Status}}
	I0401 11:14:10.819560  643361 cli_runner.go:164] Run: docker container inspect old-k8s-version-869040 --format={{.State.Status}}
	I0401 11:14:10.861310  643361 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-869040"
	W0401 11:14:10.861338  643361 addons.go:243] addon default-storageclass should already be in state true
	I0401 11:14:10.861365  643361 host.go:66] Checking if "old-k8s-version-869040" exists ...
	I0401 11:14:10.861792  643361 cli_runner.go:164] Run: docker container inspect old-k8s-version-869040 --format={{.State.Status}}
	I0401 11:14:10.867177  643361 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 11:14:10.869025  643361 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 11:14:10.869042  643361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 11:14:10.869351  643361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-869040
	I0401 11:14:10.887097  643361 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 11:14:10.890060  643361 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 11:14:10.892019  643361 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 11:14:10.890160  643361 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 11:14:10.893514  643361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 11:14:10.893604  643361 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 11:14:10.893617  643361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 11:14:10.893678  643361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-869040
	I0401 11:14:10.893825  643361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-869040
	I0401 11:14:10.938620  643361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/old-k8s-version-869040/id_rsa Username:docker}
	I0401 11:14:10.948031  643361 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 11:14:10.948051  643361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 11:14:10.948132  643361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-869040
	I0401 11:14:10.957556  643361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/old-k8s-version-869040/id_rsa Username:docker}
	I0401 11:14:10.961333  643361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/old-k8s-version-869040/id_rsa Username:docker}
	I0401 11:14:10.984426  643361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/old-k8s-version-869040/id_rsa Username:docker}
	I0401 11:14:11.027482  643361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 11:14:11.050601  643361 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-869040" to be "Ready" ...
	I0401 11:14:11.116920  643361 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 11:14:11.116998  643361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 11:14:11.142864  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 11:14:11.164789  643361 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 11:14:11.164850  643361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 11:14:11.170004  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 11:14:11.174396  643361 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 11:14:11.174470  643361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 11:14:11.200161  643361 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 11:14:11.200187  643361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 11:14:11.232793  643361 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 11:14:11.232817  643361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 11:14:11.250839  643361 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 11:14:11.250864  643361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 11:14:11.272692  643361 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 11:14:11.272717  643361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 11:14:11.315119  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 11:14:11.319805  643361 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 11:14:11.319832  643361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 11:14:11.394280  643361 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 11:14:11.394307  643361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0401 11:14:11.404152  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:11.404183  643361 retry.go:31] will retry after 142.108076ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 11:14:11.404243  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:11.404260  643361 retry.go:31] will retry after 215.488015ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:11.415839  643361 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 11:14:11.415868  643361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 11:14:11.453202  643361 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 11:14:11.453235  643361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 11:14:11.475166  643361 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 11:14:11.475191  643361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	W0401 11:14:11.479186  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:11.479223  643361 retry.go:31] will retry after 321.640394ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:11.497210  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 11:14:11.547448  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 11:14:11.578808  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:11.578890  643361 retry.go:31] will retry after 254.880829ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:11.620195  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 11:14:11.636224  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:11.636260  643361 retry.go:31] will retry after 204.122141ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 11:14:11.712404  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:11.712437  643361 retry.go:31] will retry after 266.976762ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:11.801698  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 11:14:11.834111  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 11:14:11.841570  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 11:14:11.931836  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:11.931875  643361 retry.go:31] will retry after 526.832994ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:11.980348  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 11:14:11.983106  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:11.983135  643361 retry.go:31] will retry after 473.047057ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 11:14:11.983204  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:11.983221  643361 retry.go:31] will retry after 325.52825ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 11:14:12.062632  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:12.062678  643361 retry.go:31] will retry after 694.403484ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:12.309008  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 11:14:12.385440  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:12.385532  643361 retry.go:31] will retry after 812.075691ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:12.456692  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 11:14:12.459158  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 11:14:12.560377  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:12.560468  643361 retry.go:31] will retry after 638.38441ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 11:14:12.567981  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:12.568021  643361 retry.go:31] will retry after 515.065035ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:12.758378  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 11:14:12.831517  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:12.831561  643361 retry.go:31] will retry after 1.126164782s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:13.051428  643361 node_ready.go:53] error getting node "old-k8s-version-869040": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-869040": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 11:14:13.083661  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 11:14:13.166469  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:13.166506  643361 retry.go:31] will retry after 422.171189ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:13.198810  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 11:14:13.199016  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 11:14:13.309774  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:13.309833  643361 retry.go:31] will retry after 1.049674122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 11:14:13.309967  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:13.309993  643361 retry.go:31] will retry after 1.085349755s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:13.589939  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 11:14:13.671817  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:13.671852  643361 retry.go:31] will retry after 1.030000047s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:13.957942  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 11:14:14.032088  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:14.032167  643361 retry.go:31] will retry after 1.6737348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:14.360429  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 11:14:14.396028  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 11:14:14.460380  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:14.460416  643361 retry.go:31] will retry after 2.706637375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 11:14:14.491883  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:14.491914  643361 retry.go:31] will retry after 726.413016ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:14.702693  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 11:14:14.771959  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:14.772030  643361 retry.go:31] will retry after 1.155359879s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:15.052060  643361 node_ready.go:53] error getting node "old-k8s-version-869040": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-869040": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 11:14:15.219318  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 11:14:15.317013  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:15.317118  643361 retry.go:31] will retry after 1.45668162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:15.707017  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 11:14:15.782504  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:15.782536  643361 retry.go:31] will retry after 1.268033675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:15.927721  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 11:14:15.996485  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:15.996519  643361 retry.go:31] will retry after 1.858818614s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:16.774491  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 11:14:16.845371  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:16.845405  643361 retry.go:31] will retry after 2.82641361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:17.051684  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 11:14:17.125766  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:17.125798  643361 retry.go:31] will retry after 2.323210261s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:17.167930  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 11:14:17.244471  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:17.244515  643361 retry.go:31] will retry after 2.500319063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:17.551300  643361 node_ready.go:53] error getting node "old-k8s-version-869040": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-869040": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 11:14:17.855610  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 11:14:17.926695  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:17.926725  643361 retry.go:31] will retry after 5.520638494s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:19.449753  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 11:14:19.551417  643361 node_ready.go:53] error getting node "old-k8s-version-869040": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-869040": dial tcp 192.168.85.2:8443: connect: connection refused
	W0401 11:14:19.654454  643361 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:19.654485  643361 retry.go:31] will retry after 4.784360414s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 11:14:19.672761  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 11:14:19.745651  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 11:14:23.447863  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 11:14:24.439094  643361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 11:14:27.355347  643361 node_ready.go:49] node "old-k8s-version-869040" has status "Ready":"True"
	I0401 11:14:27.355370  643361 node_ready.go:38] duration metric: took 16.304665188s for node "old-k8s-version-869040" to be "Ready" ...
	I0401 11:14:27.355379  643361 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 11:14:27.558511  643361 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-xnz2b" in "kube-system" namespace to be "Ready" ...
	I0401 11:14:27.588469  643361 pod_ready.go:92] pod "coredns-74ff55c5b-xnz2b" in "kube-system" namespace has status "Ready":"True"
	I0401 11:14:27.588498  643361 pod_ready.go:81] duration metric: took 29.913139ms for pod "coredns-74ff55c5b-xnz2b" in "kube-system" namespace to be "Ready" ...
	I0401 11:14:27.588511  643361 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-869040" in "kube-system" namespace to be "Ready" ...
	I0401 11:14:27.605583  643361 pod_ready.go:92] pod "etcd-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"True"
	I0401 11:14:27.605614  643361 pod_ready.go:81] duration metric: took 17.094777ms for pod "etcd-old-k8s-version-869040" in "kube-system" namespace to be "Ready" ...
	I0401 11:14:27.605631  643361 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-869040" in "kube-system" namespace to be "Ready" ...
	I0401 11:14:27.636292  643361 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"True"
	I0401 11:14:27.636319  643361 pod_ready.go:81] duration metric: took 30.679733ms for pod "kube-apiserver-old-k8s-version-869040" in "kube-system" namespace to be "Ready" ...
	I0401 11:14:27.636333  643361 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace to be "Ready" ...
	I0401 11:14:28.484183  643361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.811379201s)
	I0401 11:14:28.486080  643361 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-869040 addons enable metrics-server
	
	I0401 11:14:28.484407  643361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.738686881s)
	I0401 11:14:28.484508  643361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.036568398s)
	I0401 11:14:28.484537  643361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.04535897s)
	I0401 11:14:28.488735  643361 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-869040"
	I0401 11:14:28.504575  643361 out.go:177] * Enabled addons: storage-provisioner, dashboard, metrics-server, default-storageclass
	I0401 11:14:28.506361  643361 addons.go:505] duration metric: took 17.699078475s for enable addons: enabled=[storage-provisioner dashboard metrics-server default-storageclass]
	I0401 11:14:29.650491  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:14:32.146210  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:14:34.146350  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:14:36.148948  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:14:38.645796  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:14:40.646129  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:14:43.146458  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:14:45.651034  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:14:48.143898  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:14:50.643888  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:14:53.143762  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:14:55.144080  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:14:57.642559  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:00.164952  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:02.642784  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:05.142438  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:07.642732  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:09.649105  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:12.143438  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:14.644848  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:16.645736  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:19.143978  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:21.144171  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:23.145901  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:25.642800  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:28.143019  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:30.143769  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:32.642643  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:35.143105  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:37.174498  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:39.645797  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:42.148728  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:44.643419  643361 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:47.143071  643361 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"True"
	I0401 11:15:47.143097  643361 pod_ready.go:81] duration metric: took 1m19.50675695s for pod "kube-controller-manager-old-k8s-version-869040" in "kube-system" namespace to be "Ready" ...
	I0401 11:15:47.143110  643361 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f74rn" in "kube-system" namespace to be "Ready" ...
	I0401 11:15:47.148334  643361 pod_ready.go:92] pod "kube-proxy-f74rn" in "kube-system" namespace has status "Ready":"True"
	I0401 11:15:47.148376  643361 pod_ready.go:81] duration metric: took 5.25823ms for pod "kube-proxy-f74rn" in "kube-system" namespace to be "Ready" ...
	I0401 11:15:47.148388  643361 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-869040" in "kube-system" namespace to be "Ready" ...
	I0401 11:15:49.157372  643361 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:51.159075  643361 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:53.660039  643361 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:56.157450  643361 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"False"
	I0401 11:15:57.154243  643361 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-869040" in "kube-system" namespace has status "Ready":"True"
	I0401 11:15:57.154268  643361 pod_ready.go:81] duration metric: took 10.005872167s for pod "kube-scheduler-old-k8s-version-869040" in "kube-system" namespace to be "Ready" ...
	I0401 11:15:57.154282  643361 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace to be "Ready" ...
	I0401 11:15:59.160330  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:01.161400  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:03.660508  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:05.660714  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:07.660777  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:09.661792  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:12.161603  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:14.662928  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:17.160661  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:19.661700  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:22.161180  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:24.660824  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:26.660856  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:28.661649  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:31.161365  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:33.660687  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:35.660870  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:38.160393  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:40.163638  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:42.170504  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:44.660021  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:46.660113  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:48.660333  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:50.660363  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:52.660603  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:54.669217  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:57.161849  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:16:59.660132  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:01.662832  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:04.161760  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:06.162943  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:08.660717  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:11.161023  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:13.660627  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:15.661848  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:18.160586  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:20.160662  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:22.161110  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:24.161422  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:26.660119  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:28.660512  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:30.661362  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:33.161327  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:35.661764  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:38.160871  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:40.165717  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:42.165793  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:44.660049  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:46.662908  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:49.160639  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:51.160751  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:53.661774  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:56.161357  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:17:58.738719  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:01.160699  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:03.160866  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:05.165604  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:07.661492  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:10.160461  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:12.160643  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:14.661836  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:17.166406  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:19.659685  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:21.660916  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:24.160400  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:26.160643  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:28.660698  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:30.661690  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:33.160411  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:35.161564  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:37.660734  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:39.661144  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:42.163801  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:44.660821  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:47.160479  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:49.222481  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:51.661029  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:54.160806  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:56.161681  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:18:58.660820  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:00.661561  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:03.160882  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:05.161188  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:07.660342  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:09.660898  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:11.663284  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:14.161126  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:16.161171  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:18.660426  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:20.661837  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:22.669480  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:25.161668  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:27.161909  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:29.659897  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:31.661661  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:34.161450  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:36.662079  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:39.160350  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:41.163535  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:43.660774  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:46.163424  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:48.664433  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:51.160253  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:53.160840  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:55.660972  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:57.161615  643361 pod_ready.go:81] duration metric: took 4m0.007317961s for pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace to be "Ready" ...
	E0401 11:19:57.161641  643361 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0401 11:19:57.161650  643361 pod_ready.go:38] duration metric: took 5m29.80625984s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 11:19:57.161699  643361 api_server.go:52] waiting for apiserver process to appear ...
	I0401 11:19:57.161736  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0401 11:19:57.161819  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 11:19:57.202189  643361 cri.go:89] found id: "cce3647fb821cde76abbc832b46d07b1e6e1ec536df027cdf7121aaf89d84d54"
	I0401 11:19:57.202214  643361 cri.go:89] found id: "e0e8f6e0a25eea42eff804ff1ba93241ee0066db840a8f6bbb42e7b3c2a34680"
	I0401 11:19:57.202220  643361 cri.go:89] found id: ""
	I0401 11:19:57.202228  643361 logs.go:276] 2 containers: [cce3647fb821cde76abbc832b46d07b1e6e1ec536df027cdf7121aaf89d84d54 e0e8f6e0a25eea42eff804ff1ba93241ee0066db840a8f6bbb42e7b3c2a34680]
	I0401 11:19:57.202286  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.206114  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.209990  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0401 11:19:57.210061  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 11:19:57.247621  643361 cri.go:89] found id: "c0de496168287609f4df97c9320899c715145b6fa2d6529ba6a6501f46b8d0aa"
	I0401 11:19:57.247645  643361 cri.go:89] found id: "e927ec7e31fb177888402d84efcb61941c46fabf9ca972372e611a63a958c932"
	I0401 11:19:57.247650  643361 cri.go:89] found id: ""
	I0401 11:19:57.247658  643361 logs.go:276] 2 containers: [c0de496168287609f4df97c9320899c715145b6fa2d6529ba6a6501f46b8d0aa e927ec7e31fb177888402d84efcb61941c46fabf9ca972372e611a63a958c932]
	I0401 11:19:57.247721  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.251932  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.255504  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0401 11:19:57.255575  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 11:19:57.295016  643361 cri.go:89] found id: "5185cdae9a75b34ee950c9f6f1cf6ae65c67838a04ec515a67a25505b4054b46"
	I0401 11:19:57.295041  643361 cri.go:89] found id: "8fb4a48cf27c6725c90da61570872834c03f4dc1865197b3f1426f0f827f894a"
	I0401 11:19:57.295047  643361 cri.go:89] found id: ""
	I0401 11:19:57.295055  643361 logs.go:276] 2 containers: [5185cdae9a75b34ee950c9f6f1cf6ae65c67838a04ec515a67a25505b4054b46 8fb4a48cf27c6725c90da61570872834c03f4dc1865197b3f1426f0f827f894a]
	I0401 11:19:57.295113  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.298679  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.302122  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0401 11:19:57.302219  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 11:19:57.347725  643361 cri.go:89] found id: "12b9a417beed52c0d14a146ade6251330fcf90c20e8666aa518dcf7ad2f7264e"
	I0401 11:19:57.347751  643361 cri.go:89] found id: "723bd4c79f413d52322777fc3b785237740dffca6ca180349b1c3d3f0909a986"
	I0401 11:19:57.347756  643361 cri.go:89] found id: ""
	I0401 11:19:57.347764  643361 logs.go:276] 2 containers: [12b9a417beed52c0d14a146ade6251330fcf90c20e8666aa518dcf7ad2f7264e 723bd4c79f413d52322777fc3b785237740dffca6ca180349b1c3d3f0909a986]
	I0401 11:19:57.347824  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.351519  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.355109  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0401 11:19:57.355199  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 11:19:57.393337  643361 cri.go:89] found id: "c2b2b6c421e068eab7bf881b707513703035897f1d7591fb54073efa33cf4466"
	I0401 11:19:57.393359  643361 cri.go:89] found id: "f9d360e936a135d336c2d41db86ccc367112509974d2c56fcd3fa3d7fb335b9a"
	I0401 11:19:57.393364  643361 cri.go:89] found id: ""
	I0401 11:19:57.393372  643361 logs.go:276] 2 containers: [c2b2b6c421e068eab7bf881b707513703035897f1d7591fb54073efa33cf4466 f9d360e936a135d336c2d41db86ccc367112509974d2c56fcd3fa3d7fb335b9a]
	I0401 11:19:57.393452  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.398385  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.401787  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 11:19:57.401866  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 11:19:57.441491  643361 cri.go:89] found id: "db524e3e456ecefc70df2862e168f564c1c378c87591f924065c027ff5cba833"
	I0401 11:19:57.441516  643361 cri.go:89] found id: "bc5d280a36c75d595a5c1cd4fe8b305b0c4080ad9491bcbc2be95f95859bde09"
	I0401 11:19:57.441522  643361 cri.go:89] found id: ""
	I0401 11:19:57.441530  643361 logs.go:276] 2 containers: [db524e3e456ecefc70df2862e168f564c1c378c87591f924065c027ff5cba833 bc5d280a36c75d595a5c1cd4fe8b305b0c4080ad9491bcbc2be95f95859bde09]
	I0401 11:19:57.441584  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.446513  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.449946  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0401 11:19:57.450017  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 11:19:57.490437  643361 cri.go:89] found id: "49eb449739e0b955225619deb122badd906ffcfad1c39aa19b6beda5721d93ad"
	I0401 11:19:57.490458  643361 cri.go:89] found id: "000258366df555fd593e48dc6fd6719819779883aa429d40cd8ed39751170282"
	I0401 11:19:57.490463  643361 cri.go:89] found id: ""
	I0401 11:19:57.490470  643361 logs.go:276] 2 containers: [49eb449739e0b955225619deb122badd906ffcfad1c39aa19b6beda5721d93ad 000258366df555fd593e48dc6fd6719819779883aa429d40cd8ed39751170282]
	I0401 11:19:57.490526  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.495230  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.499184  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 11:19:57.499303  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 11:19:57.544025  643361 cri.go:89] found id: "408a789b3a96ce3c6f7fd9adb8c71566a9f0d114f73314b4be1662ad97d0f023"
	I0401 11:19:57.544049  643361 cri.go:89] found id: ""
	I0401 11:19:57.544057  643361 logs.go:276] 1 containers: [408a789b3a96ce3c6f7fd9adb8c71566a9f0d114f73314b4be1662ad97d0f023]
	I0401 11:19:57.544138  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.547795  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0401 11:19:57.547891  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0401 11:19:57.587290  643361 cri.go:89] found id: "27b3a26cd6f52effeb1f6ec35ce6166315c1da769749f083642e75e517cc5ec2"
	I0401 11:19:57.587314  643361 cri.go:89] found id: "27fe2520ebb746e49d07fcf1ad334f5c492ba78c197570a874201267d3ffaa3f"
	I0401 11:19:57.587320  643361 cri.go:89] found id: ""
	I0401 11:19:57.587328  643361 logs.go:276] 2 containers: [27b3a26cd6f52effeb1f6ec35ce6166315c1da769749f083642e75e517cc5ec2 27fe2520ebb746e49d07fcf1ad334f5c492ba78c197570a874201267d3ffaa3f]
	I0401 11:19:57.587409  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.591008  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.594295  643361 logs.go:123] Gathering logs for kube-scheduler [723bd4c79f413d52322777fc3b785237740dffca6ca180349b1c3d3f0909a986] ...
	I0401 11:19:57.594316  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 723bd4c79f413d52322777fc3b785237740dffca6ca180349b1c3d3f0909a986"
	I0401 11:19:57.652645  643361 logs.go:123] Gathering logs for kube-controller-manager [bc5d280a36c75d595a5c1cd4fe8b305b0c4080ad9491bcbc2be95f95859bde09] ...
	I0401 11:19:57.652677  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc5d280a36c75d595a5c1cd4fe8b305b0c4080ad9491bcbc2be95f95859bde09"
	I0401 11:19:57.738761  643361 logs.go:123] Gathering logs for storage-provisioner [27b3a26cd6f52effeb1f6ec35ce6166315c1da769749f083642e75e517cc5ec2] ...
	I0401 11:19:57.738797  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b3a26cd6f52effeb1f6ec35ce6166315c1da769749f083642e75e517cc5ec2"
	I0401 11:19:57.785705  643361 logs.go:123] Gathering logs for kube-scheduler [12b9a417beed52c0d14a146ade6251330fcf90c20e8666aa518dcf7ad2f7264e] ...
	I0401 11:19:57.785731  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12b9a417beed52c0d14a146ade6251330fcf90c20e8666aa518dcf7ad2f7264e"
	I0401 11:19:57.829596  643361 logs.go:123] Gathering logs for kubernetes-dashboard [408a789b3a96ce3c6f7fd9adb8c71566a9f0d114f73314b4be1662ad97d0f023] ...
	I0401 11:19:57.829625  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 408a789b3a96ce3c6f7fd9adb8c71566a9f0d114f73314b4be1662ad97d0f023"
	I0401 11:19:57.876043  643361 logs.go:123] Gathering logs for storage-provisioner [27fe2520ebb746e49d07fcf1ad334f5c492ba78c197570a874201267d3ffaa3f] ...
	I0401 11:19:57.879469  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fe2520ebb746e49d07fcf1ad334f5c492ba78c197570a874201267d3ffaa3f"
	I0401 11:19:57.918548  643361 logs.go:123] Gathering logs for containerd ...
	I0401 11:19:57.918575  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0401 11:19:57.976403  643361 logs.go:123] Gathering logs for container status ...
	I0401 11:19:57.976486  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 11:19:58.038394  643361 logs.go:123] Gathering logs for kubelet ...
	I0401 11:19:58.038564  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0401 11:19:58.093664  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.253107     661 reflector.go:138] object-"kube-system"/"coredns-token-l8rmx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-l8rmx" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:19:58.093916  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.253340     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:19:58.094136  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.253531     661 reflector.go:138] object-"kube-system"/"kindnet-token-crnqw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-crnqw" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:19:58.094352  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.253724     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-26j8b": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-26j8b" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:19:58.094561  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.253919     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:19:58.094783  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.254131     661 reflector.go:138] object-"kube-system"/"metrics-server-token-p8whl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-p8whl" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:19:58.095010  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.254336     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-w57c5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-w57c5" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:19:58.095219  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.254517     661 reflector.go:138] object-"default"/"default-token-ldjjx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-ldjjx" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:19:58.105836  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:30 old-k8s-version-869040 kubelet[661]: E0401 11:14:30.412495     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0401 11:19:58.106028  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:30 old-k8s-version-869040 kubelet[661]: E0401 11:14:30.876693     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.108782  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:45 old-k8s-version-869040 kubelet[661]: E0401 11:14:45.796344     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0401 11:19:58.109112  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:46 old-k8s-version-869040 kubelet[661]: E0401 11:14:46.235753     661 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-llv5m": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-llv5m" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:19:58.111078  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:57 old-k8s-version-869040 kubelet[661]: E0401 11:14:57.947477     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.111265  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:58 old-k8s-version-869040 kubelet[661]: E0401 11:14:58.786263     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.111591  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:58 old-k8s-version-869040 kubelet[661]: E0401 11:14:58.962984     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.112252  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:02 old-k8s-version-869040 kubelet[661]: E0401 11:15:02.187397     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.115174  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:13 old-k8s-version-869040 kubelet[661]: E0401 11:15:13.820810     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0401 11:19:58.115664  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:15 old-k8s-version-869040 kubelet[661]: E0401 11:15:15.015969     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.115995  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:22 old-k8s-version-869040 kubelet[661]: E0401 11:15:22.188017     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.116182  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:24 old-k8s-version-869040 kubelet[661]: E0401 11:15:24.789529     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.116494  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:37 old-k8s-version-869040 kubelet[661]: E0401 11:15:37.787553     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.116953  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:38 old-k8s-version-869040 kubelet[661]: E0401 11:15:38.129930     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.117305  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:42 old-k8s-version-869040 kubelet[661]: E0401 11:15:42.188410     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.117493  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:49 old-k8s-version-869040 kubelet[661]: E0401 11:15:49.785437     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.117820  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:56 old-k8s-version-869040 kubelet[661]: E0401 11:15:56.789835     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.120259  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:01 old-k8s-version-869040 kubelet[661]: E0401 11:16:01.793557     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0401 11:19:58.120590  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:11 old-k8s-version-869040 kubelet[661]: E0401 11:16:11.785144     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.120783  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:14 old-k8s-version-869040 kubelet[661]: E0401 11:16:14.790457     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.121375  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:25 old-k8s-version-869040 kubelet[661]: E0401 11:16:25.268985     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.121562  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:29 old-k8s-version-869040 kubelet[661]: E0401 11:16:29.785550     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.121896  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:32 old-k8s-version-869040 kubelet[661]: E0401 11:16:32.188113     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.122084  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:40 old-k8s-version-869040 kubelet[661]: E0401 11:16:40.785613     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.122409  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:43 old-k8s-version-869040 kubelet[661]: E0401 11:16:43.785076     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.122598  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:53 old-k8s-version-869040 kubelet[661]: E0401 11:16:53.785492     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.122923  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:56 old-k8s-version-869040 kubelet[661]: E0401 11:16:56.785610     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.123112  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:06 old-k8s-version-869040 kubelet[661]: E0401 11:17:06.785564     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.123439  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:09 old-k8s-version-869040 kubelet[661]: E0401 11:17:09.785023     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.123624  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:20 old-k8s-version-869040 kubelet[661]: E0401 11:17:20.785617     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.123948  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:24 old-k8s-version-869040 kubelet[661]: E0401 11:17:24.785143     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.126404  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:34 old-k8s-version-869040 kubelet[661]: E0401 11:17:34.798457     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0401 11:19:58.126735  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:35 old-k8s-version-869040 kubelet[661]: E0401 11:17:35.785081     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.127319  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:47 old-k8s-version-869040 kubelet[661]: E0401 11:17:47.454111     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.127502  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:48 old-k8s-version-869040 kubelet[661]: E0401 11:17:48.786169     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.127836  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:52 old-k8s-version-869040 kubelet[661]: E0401 11:17:52.187364     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.128019  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:02 old-k8s-version-869040 kubelet[661]: E0401 11:18:02.789625     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.128344  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:03 old-k8s-version-869040 kubelet[661]: E0401 11:18:03.785107     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.128673  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:15 old-k8s-version-869040 kubelet[661]: E0401 11:18:15.785146     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.128861  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:16 old-k8s-version-869040 kubelet[661]: E0401 11:18:16.786414     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.129706  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:27 old-k8s-version-869040 kubelet[661]: E0401 11:18:27.785261     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.129919  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:31 old-k8s-version-869040 kubelet[661]: E0401 11:18:31.785780     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.130260  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:40 old-k8s-version-869040 kubelet[661]: E0401 11:18:40.785417     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.130445  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:43 old-k8s-version-869040 kubelet[661]: E0401 11:18:43.785421     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.130779  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:53 old-k8s-version-869040 kubelet[661]: E0401 11:18:53.785086     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.130964  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:54 old-k8s-version-869040 kubelet[661]: E0401 11:18:54.785659     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.131289  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:04 old-k8s-version-869040 kubelet[661]: E0401 11:19:04.785200     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.131471  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:05 old-k8s-version-869040 kubelet[661]: E0401 11:19:05.785514     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.131653  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:16 old-k8s-version-869040 kubelet[661]: E0401 11:19:16.789686     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.131979  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:17 old-k8s-version-869040 kubelet[661]: E0401 11:19:17.785382     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.132168  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:28 old-k8s-version-869040 kubelet[661]: E0401 11:19:28.786172     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.132493  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:31 old-k8s-version-869040 kubelet[661]: E0401 11:19:31.785264     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.132676  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:43 old-k8s-version-869040 kubelet[661]: E0401 11:19:43.785550     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.133011  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:46 old-k8s-version-869040 kubelet[661]: E0401 11:19:46.785335     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	I0401 11:19:58.133022  643361 logs.go:123] Gathering logs for describe nodes ...
	I0401 11:19:58.133037  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 11:19:58.386822  643361 logs.go:123] Gathering logs for etcd [e927ec7e31fb177888402d84efcb61941c46fabf9ca972372e611a63a958c932] ...
	I0401 11:19:58.386892  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e927ec7e31fb177888402d84efcb61941c46fabf9ca972372e611a63a958c932"
	I0401 11:19:58.463147  643361 logs.go:123] Gathering logs for coredns [5185cdae9a75b34ee950c9f6f1cf6ae65c67838a04ec515a67a25505b4054b46] ...
	I0401 11:19:58.463202  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5185cdae9a75b34ee950c9f6f1cf6ae65c67838a04ec515a67a25505b4054b46"
	I0401 11:19:58.567970  643361 logs.go:123] Gathering logs for kube-proxy [c2b2b6c421e068eab7bf881b707513703035897f1d7591fb54073efa33cf4466] ...
	I0401 11:19:58.567996  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2b2b6c421e068eab7bf881b707513703035897f1d7591fb54073efa33cf4466"
	I0401 11:19:58.621882  643361 logs.go:123] Gathering logs for kube-controller-manager [db524e3e456ecefc70df2862e168f564c1c378c87591f924065c027ff5cba833] ...
	I0401 11:19:58.621914  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db524e3e456ecefc70df2862e168f564c1c378c87591f924065c027ff5cba833"
	I0401 11:19:58.714298  643361 logs.go:123] Gathering logs for etcd [c0de496168287609f4df97c9320899c715145b6fa2d6529ba6a6501f46b8d0aa] ...
	I0401 11:19:58.714377  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0de496168287609f4df97c9320899c715145b6fa2d6529ba6a6501f46b8d0aa"
	I0401 11:19:58.808824  643361 logs.go:123] Gathering logs for coredns [8fb4a48cf27c6725c90da61570872834c03f4dc1865197b3f1426f0f827f894a] ...
	I0401 11:19:58.809000  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fb4a48cf27c6725c90da61570872834c03f4dc1865197b3f1426f0f827f894a"
	I0401 11:19:58.914734  643361 logs.go:123] Gathering logs for kube-proxy [f9d360e936a135d336c2d41db86ccc367112509974d2c56fcd3fa3d7fb335b9a] ...
	I0401 11:19:58.914760  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9d360e936a135d336c2d41db86ccc367112509974d2c56fcd3fa3d7fb335b9a"
	I0401 11:19:59.000863  643361 logs.go:123] Gathering logs for kindnet [49eb449739e0b955225619deb122badd906ffcfad1c39aa19b6beda5721d93ad] ...
	I0401 11:19:59.000905  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49eb449739e0b955225619deb122badd906ffcfad1c39aa19b6beda5721d93ad"
	I0401 11:19:59.132649  643361 logs.go:123] Gathering logs for kindnet [000258366df555fd593e48dc6fd6719819779883aa429d40cd8ed39751170282] ...
	I0401 11:19:59.132683  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000258366df555fd593e48dc6fd6719819779883aa429d40cd8ed39751170282"
	I0401 11:19:59.237488  643361 logs.go:123] Gathering logs for dmesg ...
	I0401 11:19:59.237526  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 11:19:59.268034  643361 logs.go:123] Gathering logs for kube-apiserver [cce3647fb821cde76abbc832b46d07b1e6e1ec536df027cdf7121aaf89d84d54] ...
	I0401 11:19:59.268113  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce3647fb821cde76abbc832b46d07b1e6e1ec536df027cdf7121aaf89d84d54"
	I0401 11:19:59.436524  643361 logs.go:123] Gathering logs for kube-apiserver [e0e8f6e0a25eea42eff804ff1ba93241ee0066db840a8f6bbb42e7b3c2a34680] ...
	I0401 11:19:59.436572  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0e8f6e0a25eea42eff804ff1ba93241ee0066db840a8f6bbb42e7b3c2a34680"
	I0401 11:19:59.651913  643361 out.go:304] Setting ErrFile to fd 2...
	I0401 11:19:59.651956  643361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0401 11:19:59.652040  643361 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0401 11:19:59.652055  643361 out.go:239]   Apr 01 11:19:17 old-k8s-version-869040 kubelet[661]: E0401 11:19:17.785382     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	  Apr 01 11:19:17 old-k8s-version-869040 kubelet[661]: E0401 11:19:17.785382     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:59.652068  643361 out.go:239]   Apr 01 11:19:28 old-k8s-version-869040 kubelet[661]: E0401 11:19:28.786172     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Apr 01 11:19:28 old-k8s-version-869040 kubelet[661]: E0401 11:19:28.786172     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:59.652081  643361 out.go:239]   Apr 01 11:19:31 old-k8s-version-869040 kubelet[661]: E0401 11:19:31.785264     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	  Apr 01 11:19:31 old-k8s-version-869040 kubelet[661]: E0401 11:19:31.785264     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:59.652096  643361 out.go:239]   Apr 01 11:19:43 old-k8s-version-869040 kubelet[661]: E0401 11:19:43.785550     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Apr 01 11:19:43 old-k8s-version-869040 kubelet[661]: E0401 11:19:43.785550     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:59.652120  643361 out.go:239]   Apr 01 11:19:46 old-k8s-version-869040 kubelet[661]: E0401 11:19:46.785335     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	  Apr 01 11:19:46 old-k8s-version-869040 kubelet[661]: E0401 11:19:46.785335     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	I0401 11:19:59.652127  643361 out.go:304] Setting ErrFile to fd 2...
	I0401 11:19:59.652133  643361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:20:09.652958  643361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 11:20:09.667698  643361 api_server.go:72] duration metric: took 5m58.86071471s to wait for apiserver process to appear ...
	I0401 11:20:09.667722  643361 api_server.go:88] waiting for apiserver healthz status ...
	I0401 11:20:09.667756  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0401 11:20:09.667812  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 11:20:09.723733  643361 cri.go:89] found id: "cce3647fb821cde76abbc832b46d07b1e6e1ec536df027cdf7121aaf89d84d54"
	I0401 11:20:09.723755  643361 cri.go:89] found id: "e0e8f6e0a25eea42eff804ff1ba93241ee0066db840a8f6bbb42e7b3c2a34680"
	I0401 11:20:09.723759  643361 cri.go:89] found id: ""
	I0401 11:20:09.723767  643361 logs.go:276] 2 containers: [cce3647fb821cde76abbc832b46d07b1e6e1ec536df027cdf7121aaf89d84d54 e0e8f6e0a25eea42eff804ff1ba93241ee0066db840a8f6bbb42e7b3c2a34680]
	I0401 11:20:09.723823  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:09.727691  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:09.731263  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0401 11:20:09.731374  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 11:20:09.816767  643361 cri.go:89] found id: "c0de496168287609f4df97c9320899c715145b6fa2d6529ba6a6501f46b8d0aa"
	I0401 11:20:09.816795  643361 cri.go:89] found id: "e927ec7e31fb177888402d84efcb61941c46fabf9ca972372e611a63a958c932"
	I0401 11:20:09.816800  643361 cri.go:89] found id: ""
	I0401 11:20:09.816807  643361 logs.go:276] 2 containers: [c0de496168287609f4df97c9320899c715145b6fa2d6529ba6a6501f46b8d0aa e927ec7e31fb177888402d84efcb61941c46fabf9ca972372e611a63a958c932]
	I0401 11:20:09.816864  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:09.834424  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:09.839963  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0401 11:20:09.840049  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 11:20:09.936538  643361 cri.go:89] found id: "5185cdae9a75b34ee950c9f6f1cf6ae65c67838a04ec515a67a25505b4054b46"
	I0401 11:20:09.936559  643361 cri.go:89] found id: "8fb4a48cf27c6725c90da61570872834c03f4dc1865197b3f1426f0f827f894a"
	I0401 11:20:09.936564  643361 cri.go:89] found id: ""
	I0401 11:20:09.936575  643361 logs.go:276] 2 containers: [5185cdae9a75b34ee950c9f6f1cf6ae65c67838a04ec515a67a25505b4054b46 8fb4a48cf27c6725c90da61570872834c03f4dc1865197b3f1426f0f827f894a]
	I0401 11:20:09.936659  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:09.944057  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:09.948554  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0401 11:20:09.948633  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 11:20:10.068613  643361 cri.go:89] found id: "12b9a417beed52c0d14a146ade6251330fcf90c20e8666aa518dcf7ad2f7264e"
	I0401 11:20:10.068633  643361 cri.go:89] found id: "723bd4c79f413d52322777fc3b785237740dffca6ca180349b1c3d3f0909a986"
	I0401 11:20:10.068637  643361 cri.go:89] found id: ""
	I0401 11:20:10.068645  643361 logs.go:276] 2 containers: [12b9a417beed52c0d14a146ade6251330fcf90c20e8666aa518dcf7ad2f7264e 723bd4c79f413d52322777fc3b785237740dffca6ca180349b1c3d3f0909a986]
	I0401 11:20:10.068702  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.075148  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.079249  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0401 11:20:10.079380  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 11:20:10.167628  643361 cri.go:89] found id: "c2b2b6c421e068eab7bf881b707513703035897f1d7591fb54073efa33cf4466"
	I0401 11:20:10.167654  643361 cri.go:89] found id: "f9d360e936a135d336c2d41db86ccc367112509974d2c56fcd3fa3d7fb335b9a"
	I0401 11:20:10.167660  643361 cri.go:89] found id: ""
	I0401 11:20:10.167672  643361 logs.go:276] 2 containers: [c2b2b6c421e068eab7bf881b707513703035897f1d7591fb54073efa33cf4466 f9d360e936a135d336c2d41db86ccc367112509974d2c56fcd3fa3d7fb335b9a]
	I0401 11:20:10.167745  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.178218  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.184468  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 11:20:10.184670  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 11:20:10.273665  643361 cri.go:89] found id: "db524e3e456ecefc70df2862e168f564c1c378c87591f924065c027ff5cba833"
	I0401 11:20:10.273738  643361 cri.go:89] found id: "bc5d280a36c75d595a5c1cd4fe8b305b0c4080ad9491bcbc2be95f95859bde09"
	I0401 11:20:10.273758  643361 cri.go:89] found id: ""
	I0401 11:20:10.273782  643361 logs.go:276] 2 containers: [db524e3e456ecefc70df2862e168f564c1c378c87591f924065c027ff5cba833 bc5d280a36c75d595a5c1cd4fe8b305b0c4080ad9491bcbc2be95f95859bde09]
	I0401 11:20:10.273870  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.278105  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.282581  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0401 11:20:10.282701  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 11:20:10.332020  643361 cri.go:89] found id: "49eb449739e0b955225619deb122badd906ffcfad1c39aa19b6beda5721d93ad"
	I0401 11:20:10.332079  643361 cri.go:89] found id: "000258366df555fd593e48dc6fd6719819779883aa429d40cd8ed39751170282"
	I0401 11:20:10.332105  643361 cri.go:89] found id: ""
	I0401 11:20:10.332125  643361 logs.go:276] 2 containers: [49eb449739e0b955225619deb122badd906ffcfad1c39aa19b6beda5721d93ad 000258366df555fd593e48dc6fd6719819779883aa429d40cd8ed39751170282]
	I0401 11:20:10.332215  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.336554  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.340365  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0401 11:20:10.340501  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0401 11:20:10.392328  643361 cri.go:89] found id: "27b3a26cd6f52effeb1f6ec35ce6166315c1da769749f083642e75e517cc5ec2"
	I0401 11:20:10.392399  643361 cri.go:89] found id: "27fe2520ebb746e49d07fcf1ad334f5c492ba78c197570a874201267d3ffaa3f"
	I0401 11:20:10.392431  643361 cri.go:89] found id: ""
	I0401 11:20:10.392458  643361 logs.go:276] 2 containers: [27b3a26cd6f52effeb1f6ec35ce6166315c1da769749f083642e75e517cc5ec2 27fe2520ebb746e49d07fcf1ad334f5c492ba78c197570a874201267d3ffaa3f]
	I0401 11:20:10.392548  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.396743  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.425218  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 11:20:10.425342  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 11:20:10.563915  643361 cri.go:89] found id: "408a789b3a96ce3c6f7fd9adb8c71566a9f0d114f73314b4be1662ad97d0f023"
	I0401 11:20:10.563987  643361 cri.go:89] found id: ""
	I0401 11:20:10.564023  643361 logs.go:276] 1 containers: [408a789b3a96ce3c6f7fd9adb8c71566a9f0d114f73314b4be1662ad97d0f023]
	I0401 11:20:10.564129  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.568984  643361 logs.go:123] Gathering logs for kubelet ...
	I0401 11:20:10.569065  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0401 11:20:10.632968  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.253107     661 reflector.go:138] object-"kube-system"/"coredns-token-l8rmx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-l8rmx" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:20:10.633289  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.253340     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:20:10.633582  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.253531     661 reflector.go:138] object-"kube-system"/"kindnet-token-crnqw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-crnqw" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:20:10.633848  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.253724     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-26j8b": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-26j8b" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:20:10.634086  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.253919     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:20:10.634349  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.254131     661 reflector.go:138] object-"kube-system"/"metrics-server-token-p8whl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-p8whl" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:20:10.634604  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.254336     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-w57c5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-w57c5" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:20:10.634837  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.254517     661 reflector.go:138] object-"default"/"default-token-ldjjx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-ldjjx" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:20:10.648302  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:30 old-k8s-version-869040 kubelet[661]: E0401 11:14:30.412495     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0401 11:20:10.648577  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:30 old-k8s-version-869040 kubelet[661]: E0401 11:14:30.876693     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.651489  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:45 old-k8s-version-869040 kubelet[661]: E0401 11:14:45.796344     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0401 11:20:10.653316  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:46 old-k8s-version-869040 kubelet[661]: E0401 11:14:46.235753     661 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-llv5m": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-llv5m" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:20:10.656663  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:57 old-k8s-version-869040 kubelet[661]: E0401 11:14:57.947477     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.656917  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:58 old-k8s-version-869040 kubelet[661]: E0401 11:14:58.786263     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.657294  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:58 old-k8s-version-869040 kubelet[661]: E0401 11:14:58.962984     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.658036  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:02 old-k8s-version-869040 kubelet[661]: E0401 11:15:02.187397     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.660978  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:13 old-k8s-version-869040 kubelet[661]: E0401 11:15:13.820810     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0401 11:20:10.663666  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:15 old-k8s-version-869040 kubelet[661]: E0401 11:15:15.015969     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.664050  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:22 old-k8s-version-869040 kubelet[661]: E0401 11:15:22.188017     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.664258  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:24 old-k8s-version-869040 kubelet[661]: E0401 11:15:24.789529     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.664608  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:37 old-k8s-version-869040 kubelet[661]: E0401 11:15:37.787553     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.666743  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:38 old-k8s-version-869040 kubelet[661]: E0401 11:15:38.129930     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.667108  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:42 old-k8s-version-869040 kubelet[661]: E0401 11:15:42.188410     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.667339  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:49 old-k8s-version-869040 kubelet[661]: E0401 11:15:49.785437     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.667696  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:56 old-k8s-version-869040 kubelet[661]: E0401 11:15:56.789835     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.670211  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:01 old-k8s-version-869040 kubelet[661]: E0401 11:16:01.793557     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0401 11:20:10.671060  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:11 old-k8s-version-869040 kubelet[661]: E0401 11:16:11.785144     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.671302  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:14 old-k8s-version-869040 kubelet[661]: E0401 11:16:14.790457     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.673260  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:25 old-k8s-version-869040 kubelet[661]: E0401 11:16:25.268985     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.673481  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:29 old-k8s-version-869040 kubelet[661]: E0401 11:16:29.785550     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.673834  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:32 old-k8s-version-869040 kubelet[661]: E0401 11:16:32.188113     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.674042  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:40 old-k8s-version-869040 kubelet[661]: E0401 11:16:40.785613     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.674407  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:43 old-k8s-version-869040 kubelet[661]: E0401 11:16:43.785076     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.674617  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:53 old-k8s-version-869040 kubelet[661]: E0401 11:16:53.785492     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.675386  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:56 old-k8s-version-869040 kubelet[661]: E0401 11:16:56.785610     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.675602  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:06 old-k8s-version-869040 kubelet[661]: E0401 11:17:06.785564     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.675965  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:09 old-k8s-version-869040 kubelet[661]: E0401 11:17:09.785023     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.676172  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:20 old-k8s-version-869040 kubelet[661]: E0401 11:17:20.785617     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.676588  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:24 old-k8s-version-869040 kubelet[661]: E0401 11:17:24.785143     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.679138  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:34 old-k8s-version-869040 kubelet[661]: E0401 11:17:34.798457     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0401 11:20:10.679510  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:35 old-k8s-version-869040 kubelet[661]: E0401 11:17:35.785081     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.680185  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:47 old-k8s-version-869040 kubelet[661]: E0401 11:17:47.454111     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.680466  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:48 old-k8s-version-869040 kubelet[661]: E0401 11:17:48.786169     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.680827  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:52 old-k8s-version-869040 kubelet[661]: E0401 11:17:52.187364     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.681036  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:02 old-k8s-version-869040 kubelet[661]: E0401 11:18:02.789625     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.681419  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:03 old-k8s-version-869040 kubelet[661]: E0401 11:18:03.785107     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.681777  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:15 old-k8s-version-869040 kubelet[661]: E0401 11:18:15.785146     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.681984  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:16 old-k8s-version-869040 kubelet[661]: E0401 11:18:16.786414     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.682336  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:27 old-k8s-version-869040 kubelet[661]: E0401 11:18:27.785261     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.682542  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:31 old-k8s-version-869040 kubelet[661]: E0401 11:18:31.785780     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.682894  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:40 old-k8s-version-869040 kubelet[661]: E0401 11:18:40.785417     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.683101  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:43 old-k8s-version-869040 kubelet[661]: E0401 11:18:43.785421     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.683452  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:53 old-k8s-version-869040 kubelet[661]: E0401 11:18:53.785086     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.683660  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:54 old-k8s-version-869040 kubelet[661]: E0401 11:18:54.785659     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.684013  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:04 old-k8s-version-869040 kubelet[661]: E0401 11:19:04.785200     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.684228  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:05 old-k8s-version-869040 kubelet[661]: E0401 11:19:05.785514     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.684441  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:16 old-k8s-version-869040 kubelet[661]: E0401 11:19:16.789686     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.684859  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:17 old-k8s-version-869040 kubelet[661]: E0401 11:19:17.785382     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.685076  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:28 old-k8s-version-869040 kubelet[661]: E0401 11:19:28.786172     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.685440  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:31 old-k8s-version-869040 kubelet[661]: E0401 11:19:31.785264     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.685649  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:43 old-k8s-version-869040 kubelet[661]: E0401 11:19:43.785550     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.686001  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:46 old-k8s-version-869040 kubelet[661]: E0401 11:19:46.785335     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.686206  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:58 old-k8s-version-869040 kubelet[661]: E0401 11:19:58.787551     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.686560  643361 logs.go:138] Found kubelet problem: Apr 01 11:20:01 old-k8s-version-869040 kubelet[661]: E0401 11:20:01.785197     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	I0401 11:20:10.686573  643361 logs.go:123] Gathering logs for kube-apiserver [e0e8f6e0a25eea42eff804ff1ba93241ee0066db840a8f6bbb42e7b3c2a34680] ...
	I0401 11:20:10.686597  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0e8f6e0a25eea42eff804ff1ba93241ee0066db840a8f6bbb42e7b3c2a34680"
	I0401 11:20:10.790307  643361 logs.go:123] Gathering logs for kube-scheduler [723bd4c79f413d52322777fc3b785237740dffca6ca180349b1c3d3f0909a986] ...
	I0401 11:20:10.790360  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 723bd4c79f413d52322777fc3b785237740dffca6ca180349b1c3d3f0909a986"
	I0401 11:20:10.896164  643361 logs.go:123] Gathering logs for kubernetes-dashboard [408a789b3a96ce3c6f7fd9adb8c71566a9f0d114f73314b4be1662ad97d0f023] ...
	I0401 11:20:10.896204  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 408a789b3a96ce3c6f7fd9adb8c71566a9f0d114f73314b4be1662ad97d0f023"
	I0401 11:20:10.981670  643361 logs.go:123] Gathering logs for describe nodes ...
	I0401 11:20:10.981704  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 11:20:11.218439  643361 logs.go:123] Gathering logs for coredns [8fb4a48cf27c6725c90da61570872834c03f4dc1865197b3f1426f0f827f894a] ...
	I0401 11:20:11.218479  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fb4a48cf27c6725c90da61570872834c03f4dc1865197b3f1426f0f827f894a"
	I0401 11:20:11.304902  643361 logs.go:123] Gathering logs for kube-proxy [f9d360e936a135d336c2d41db86ccc367112509974d2c56fcd3fa3d7fb335b9a] ...
	I0401 11:20:11.304932  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9d360e936a135d336c2d41db86ccc367112509974d2c56fcd3fa3d7fb335b9a"
	I0401 11:20:11.387568  643361 logs.go:123] Gathering logs for storage-provisioner [27fe2520ebb746e49d07fcf1ad334f5c492ba78c197570a874201267d3ffaa3f] ...
	I0401 11:20:11.387650  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fe2520ebb746e49d07fcf1ad334f5c492ba78c197570a874201267d3ffaa3f"
	I0401 11:20:11.437572  643361 logs.go:123] Gathering logs for containerd ...
	I0401 11:20:11.437639  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0401 11:20:11.501863  643361 logs.go:123] Gathering logs for container status ...
	I0401 11:20:11.501940  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 11:20:11.608556  643361 logs.go:123] Gathering logs for etcd [c0de496168287609f4df97c9320899c715145b6fa2d6529ba6a6501f46b8d0aa] ...
	I0401 11:20:11.608587  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0de496168287609f4df97c9320899c715145b6fa2d6529ba6a6501f46b8d0aa"
	I0401 11:20:11.663339  643361 logs.go:123] Gathering logs for coredns [5185cdae9a75b34ee950c9f6f1cf6ae65c67838a04ec515a67a25505b4054b46] ...
	I0401 11:20:11.663377  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5185cdae9a75b34ee950c9f6f1cf6ae65c67838a04ec515a67a25505b4054b46"
	I0401 11:20:11.733865  643361 logs.go:123] Gathering logs for kube-scheduler [12b9a417beed52c0d14a146ade6251330fcf90c20e8666aa518dcf7ad2f7264e] ...
	I0401 11:20:11.733901  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12b9a417beed52c0d14a146ade6251330fcf90c20e8666aa518dcf7ad2f7264e"
	I0401 11:20:11.808863  643361 logs.go:123] Gathering logs for kube-proxy [c2b2b6c421e068eab7bf881b707513703035897f1d7591fb54073efa33cf4466] ...
	I0401 11:20:11.808901  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2b2b6c421e068eab7bf881b707513703035897f1d7591fb54073efa33cf4466"
	I0401 11:20:11.860561  643361 logs.go:123] Gathering logs for storage-provisioner [27b3a26cd6f52effeb1f6ec35ce6166315c1da769749f083642e75e517cc5ec2] ...
	I0401 11:20:11.860597  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b3a26cd6f52effeb1f6ec35ce6166315c1da769749f083642e75e517cc5ec2"
	I0401 11:20:11.911415  643361 logs.go:123] Gathering logs for dmesg ...
	I0401 11:20:11.911449  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 11:20:11.935394  643361 logs.go:123] Gathering logs for kube-apiserver [cce3647fb821cde76abbc832b46d07b1e6e1ec536df027cdf7121aaf89d84d54] ...
	I0401 11:20:11.935644  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce3647fb821cde76abbc832b46d07b1e6e1ec536df027cdf7121aaf89d84d54"
	I0401 11:20:12.010132  643361 logs.go:123] Gathering logs for etcd [e927ec7e31fb177888402d84efcb61941c46fabf9ca972372e611a63a958c932] ...
	I0401 11:20:12.010227  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e927ec7e31fb177888402d84efcb61941c46fabf9ca972372e611a63a958c932"
	I0401 11:20:12.065906  643361 logs.go:123] Gathering logs for kube-controller-manager [db524e3e456ecefc70df2862e168f564c1c378c87591f924065c027ff5cba833] ...
	I0401 11:20:12.065991  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db524e3e456ecefc70df2862e168f564c1c378c87591f924065c027ff5cba833"
	I0401 11:20:12.159500  643361 logs.go:123] Gathering logs for kube-controller-manager [bc5d280a36c75d595a5c1cd4fe8b305b0c4080ad9491bcbc2be95f95859bde09] ...
	I0401 11:20:12.159590  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc5d280a36c75d595a5c1cd4fe8b305b0c4080ad9491bcbc2be95f95859bde09"
	I0401 11:20:12.243372  643361 logs.go:123] Gathering logs for kindnet [49eb449739e0b955225619deb122badd906ffcfad1c39aa19b6beda5721d93ad] ...
	I0401 11:20:12.243455  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49eb449739e0b955225619deb122badd906ffcfad1c39aa19b6beda5721d93ad"
	I0401 11:20:12.295763  643361 logs.go:123] Gathering logs for kindnet [000258366df555fd593e48dc6fd6719819779883aa429d40cd8ed39751170282] ...
	I0401 11:20:12.296014  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000258366df555fd593e48dc6fd6719819779883aa429d40cd8ed39751170282"
	I0401 11:20:12.349216  643361 out.go:304] Setting ErrFile to fd 2...
	I0401 11:20:12.349287  643361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0401 11:20:12.349362  643361 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0401 11:20:12.349406  643361 out.go:239]   Apr 01 11:19:31 old-k8s-version-869040 kubelet[661]: E0401 11:19:31.785264     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	  Apr 01 11:19:31 old-k8s-version-869040 kubelet[661]: E0401 11:19:31.785264     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:12.349570  643361 out.go:239]   Apr 01 11:19:43 old-k8s-version-869040 kubelet[661]: E0401 11:19:43.785550     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Apr 01 11:19:43 old-k8s-version-869040 kubelet[661]: E0401 11:19:43.785550     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:12.349606  643361 out.go:239]   Apr 01 11:19:46 old-k8s-version-869040 kubelet[661]: E0401 11:19:46.785335     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	  Apr 01 11:19:46 old-k8s-version-869040 kubelet[661]: E0401 11:19:46.785335     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:12.349649  643361 out.go:239]   Apr 01 11:19:58 old-k8s-version-869040 kubelet[661]: E0401 11:19:58.787551     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Apr 01 11:19:58 old-k8s-version-869040 kubelet[661]: E0401 11:19:58.787551     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:12.349688  643361 out.go:239]   Apr 01 11:20:01 old-k8s-version-869040 kubelet[661]: E0401 11:20:01.785197     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	  Apr 01 11:20:01 old-k8s-version-869040 kubelet[661]: E0401 11:20:01.785197     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	I0401 11:20:12.349723  643361 out.go:304] Setting ErrFile to fd 2...
	I0401 11:20:12.349765  643361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:20:22.351102  643361 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0401 11:20:22.363403  643361 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0401 11:20:22.365970  643361 out.go:177] 
	W0401 11:20:22.367560  643361 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0401 11:20:22.367651  643361 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0401 11:20:22.367720  643361 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0401 11:20:22.367758  643361 out.go:239] * 
	* 
	W0401 11:20:22.368797  643361 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 11:20:22.371249  643361 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-869040 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
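For local reproduction, the log's own suggestion above ("try minikube delete --all --purge") combined with the failing invocation translates into a short manual sequence. This is only a hedged sketch, assuming the same out/minikube-linux-arm64 binary and a Docker host; it is not part of the test harness:

	# wipe all minikube profiles and cached state, as suggested in the log above
	out/minikube-linux-arm64 delete --all --purge
	# retry the exact start command that exited with status 102
	out/minikube-linux-arm64 start -p old-k8s-version-869040 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0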
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-869040
helpers_test.go:235: (dbg) docker inspect old-k8s-version-869040:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bcd4cdebba89be9a022a1c9c468b5dd6ca8f99167ad7b4e757ac07cbbbfe32a2",
	        "Created": "2024-04-01T11:10:51.615173693Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 643690,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-01T11:14:03.652108715Z",
	            "FinishedAt": "2024-04-01T11:14:02.182331346Z"
	        },
	        "Image": "sha256:d0f05b8b802e4c4af20a90d686bad8329f07849a8fda1b1d1c1dc3f527691df0",
	        "ResolvConfPath": "/var/lib/docker/containers/bcd4cdebba89be9a022a1c9c468b5dd6ca8f99167ad7b4e757ac07cbbbfe32a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bcd4cdebba89be9a022a1c9c468b5dd6ca8f99167ad7b4e757ac07cbbbfe32a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/bcd4cdebba89be9a022a1c9c468b5dd6ca8f99167ad7b4e757ac07cbbbfe32a2/hosts",
	        "LogPath": "/var/lib/docker/containers/bcd4cdebba89be9a022a1c9c468b5dd6ca8f99167ad7b4e757ac07cbbbfe32a2/bcd4cdebba89be9a022a1c9c468b5dd6ca8f99167ad7b4e757ac07cbbbfe32a2-json.log",
	        "Name": "/old-k8s-version-869040",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-869040:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-869040",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/79418d90303ff420840a9928ad36ab42369bab114bd77632a8c29f7dd0d34cbb-init/diff:/var/lib/docker/overlay2/65e26a120eed9f31cb763816aea149af9d6db48117d016131d4955e22e308b16/diff",
	                "MergedDir": "/var/lib/docker/overlay2/79418d90303ff420840a9928ad36ab42369bab114bd77632a8c29f7dd0d34cbb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/79418d90303ff420840a9928ad36ab42369bab114bd77632a8c29f7dd0d34cbb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/79418d90303ff420840a9928ad36ab42369bab114bd77632a8c29f7dd0d34cbb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-869040",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-869040/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-869040",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-869040",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-869040",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36df19d3c553bc674d262e83f4062b3b6b3d0feedf54825bd120c007df4286f4",
	            "SandboxKey": "/var/run/docker/netns/36df19d3c553",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-869040": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "13da13ea5cdbc3b71cd28aa4a5973376699f60442ee474f42b53b9bd5e3cb7de",
	                    "EndpointID": "d9a04a4c5584206c3888c3e992dd8f2814dbc6659fe07f34f438010268d59d3a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-869040",
	                        "bcd4cdebba89"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
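The inspect output above is the raw JSON; if needed, individual fields can be pulled with docker's built-in Go-template formatter. This is a hedged convenience example, not something the test harness runs:

	# container state, per .State.Status in the JSON above
	docker inspect -f '{{.State.Status}}' old-k8s-version-869040
	# host port mapped to the guest SSH port 22/tcp (33466 in the JSON above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-869040
	# the profile's address on its named network (192.168.85.2 above)
	docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-869040").IPAddress}}' old-k8s-version-869040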
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-869040 -n old-k8s-version-869040
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-869040 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-869040 logs -n 25: (2.531089184s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p cert-expiration-152372                              | cert-expiration-152372       | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:09 UTC | 01 Apr 24 11:10 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --cert-expiration=3m                                   |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	| ssh     | force-systemd-env-739457                               | force-systemd-env-739457     | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:10 UTC | 01 Apr 24 11:10 UTC |
	|         | ssh cat                                                |                              |         |                |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| delete  | -p force-systemd-env-739457                            | force-systemd-env-739457     | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:10 UTC | 01 Apr 24 11:10 UTC |
	| start   | -p cert-options-677057                                 | cert-options-677057          | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:10 UTC | 01 Apr 24 11:10 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                              |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                              |         |                |                     |                     |
	|         | --apiserver-names=localhost                            |                              |         |                |                     |                     |
	|         | --apiserver-names=www.google.com                       |                              |         |                |                     |                     |
	|         | --apiserver-port=8555                                  |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	| ssh     | cert-options-677057 ssh                                | cert-options-677057          | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:10 UTC | 01 Apr 24 11:10 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |                |                     |                     |
	| ssh     | -p cert-options-677057 -- sudo                         | cert-options-677057          | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:10 UTC | 01 Apr 24 11:10 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |                |                     |                     |
	| delete  | -p cert-options-677057                                 | cert-options-677057          | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:10 UTC | 01 Apr 24 11:10 UTC |
	| start   | -p old-k8s-version-869040                              | old-k8s-version-869040       | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:10 UTC | 01 Apr 24 11:13 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| start   | -p cert-expiration-152372                              | cert-expiration-152372       | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:13 UTC | 01 Apr 24 11:13 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-152372                              | cert-expiration-152372       | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:13 UTC | 01 Apr 24 11:13 UTC |
	| start   | -p                                                     | default-k8s-diff-port-293463 | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:13 UTC | 01 Apr 24 11:14 UTC |
	|         | default-k8s-diff-port-293463                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-869040        | old-k8s-version-869040       | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:13 UTC | 01 Apr 24 11:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-869040                              | old-k8s-version-869040       | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:13 UTC | 01 Apr 24 11:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-869040             | old-k8s-version-869040       | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:14 UTC | 01 Apr 24 11:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-869040                              | old-k8s-version-869040       | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-293463  | default-k8s-diff-port-293463 | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:14 UTC | 01 Apr 24 11:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-293463 | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:14 UTC | 01 Apr 24 11:15 UTC |
	|         | default-k8s-diff-port-293463                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-293463       | default-k8s-diff-port-293463 | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:15 UTC | 01 Apr 24 11:15 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-293463 | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:15 UTC | 01 Apr 24 11:19 UTC |
	|         | default-k8s-diff-port-293463                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=docker                                        |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| image   | default-k8s-diff-port-293463                           | default-k8s-diff-port-293463 | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:19 UTC | 01 Apr 24 11:19 UTC |
	|         | image list --format=json                               |                              |         |                |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-293463 | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:19 UTC | 01 Apr 24 11:19 UTC |
	|         | default-k8s-diff-port-293463                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-293463 | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:19 UTC | 01 Apr 24 11:19 UTC |
	|         | default-k8s-diff-port-293463                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-293463 | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:19 UTC | 01 Apr 24 11:19 UTC |
	|         | default-k8s-diff-port-293463                           |                              |         |                |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-293463 | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:19 UTC | 01 Apr 24 11:19 UTC |
	|         | default-k8s-diff-port-293463                           |                              |         |                |                     |                     |
	| start   | -p embed-certs-300026                                  | embed-certs-300026           | jenkins | v1.33.0-beta.0 | 01 Apr 24 11:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |                |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 11:19:52
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 11:19:52.684978  653226 out.go:291] Setting OutFile to fd 1 ...
	I0401 11:19:52.685156  653226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:19:52.685168  653226 out.go:304] Setting ErrFile to fd 2...
	I0401 11:19:52.685173  653226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:19:52.685457  653226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18551-440344/.minikube/bin
	I0401 11:19:52.685885  653226 out.go:298] Setting JSON to false
	I0401 11:19:52.686893  653226 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10943,"bootTime":1711959450,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0401 11:19:52.686964  653226 start.go:139] virtualization:  
	I0401 11:19:52.691431  653226 out.go:177] * [embed-certs-300026] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0401 11:19:52.693849  653226 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 11:19:52.693967  653226 notify.go:220] Checking for updates...
	I0401 11:19:52.695931  653226 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 11:19:52.698509  653226 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18551-440344/kubeconfig
	I0401 11:19:52.700880  653226 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18551-440344/.minikube
	I0401 11:19:52.703179  653226 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0401 11:19:52.705602  653226 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 11:19:52.708314  653226 config.go:182] Loaded profile config "old-k8s-version-869040": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0401 11:19:52.708433  653226 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 11:19:52.732511  653226 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0401 11:19:52.732639  653226 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 11:19:52.816769  653226 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-01 11:19:52.804398724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0401 11:19:52.816885  653226 docker.go:295] overlay module found
	I0401 11:19:52.820586  653226 out.go:177] * Using the docker driver based on user configuration
	I0401 11:19:48.664433  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:51.160253  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:52.822817  653226 start.go:297] selected driver: docker
	I0401 11:19:52.822863  653226 start.go:901] validating driver "docker" against <nil>
	I0401 11:19:52.822879  653226 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 11:19:52.823589  653226 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 11:19:52.880949  653226 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-01 11:19:52.872247229 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0401 11:19:52.881169  653226 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 11:19:52.881436  653226 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 11:19:52.883656  653226 out.go:177] * Using Docker driver with root privileges
	I0401 11:19:52.885498  653226 cni.go:84] Creating CNI manager for ""
	I0401 11:19:52.885519  653226 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0401 11:19:52.885529  653226 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 11:19:52.885613  653226 start.go:340] cluster config:
	{Name:embed-certs-300026 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-300026 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
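The cluster config dumped above is what gets persisted to the profile directory a few lines later (profile.go saves it to .../profiles/embed-certs-300026/config.json). A minimal sketch for pulling the interesting fields back out of that file, assuming jq is available on the CI host:

	jq '{version: .KubernetesConfig.KubernetesVersion, runtime: .KubernetesConfig.ContainerRuntime, nodes: .Nodes}' \
	  /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/config.json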
	I0401 11:19:52.887850  653226 out.go:177] * Starting "embed-certs-300026" primary control-plane node in "embed-certs-300026" cluster
	I0401 11:19:52.889937  653226 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0401 11:19:52.892150  653226 out.go:177] * Pulling base image v0.0.43-1711559786-18485 ...
	I0401 11:19:52.893862  653226 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0401 11:19:52.893904  653226 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon
	I0401 11:19:52.893951  653226 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18551-440344/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	I0401 11:19:52.893961  653226 cache.go:56] Caching tarball of preloaded images
	I0401 11:19:52.894033  653226 preload.go:173] Found /home/jenkins/minikube-integration/18551-440344/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0401 11:19:52.894043  653226 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on containerd
	I0401 11:19:52.894183  653226 profile.go:143] Saving config to /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/config.json ...
	I0401 11:19:52.894319  653226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/config.json: {Name:mk55b97c1b364dd8e79b03e34cb9f5a617a9c192 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:19:52.908212  653226 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon, skipping pull
	I0401 11:19:52.908241  653226 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 exists in daemon, skipping load
	I0401 11:19:52.908264  653226 cache.go:194] Successfully downloaded all kic artifacts
	I0401 11:19:52.908293  653226 start.go:360] acquireMachinesLock for embed-certs-300026: {Name:mk3bbcfb3a0bf188d04e344aeb73c3daba39a093 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 11:19:52.908407  653226 start.go:364] duration metric: took 91.296µs to acquireMachinesLock for "embed-certs-300026"
	I0401 11:19:52.908439  653226 start.go:93] Provisioning new machine with config: &{Name:embed-certs-300026 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-300026 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0401 11:19:52.908536  653226 start.go:125] createHost starting for "" (driver="docker")
	I0401 11:19:52.911265  653226 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0401 11:19:52.911512  653226 start.go:159] libmachine.API.Create for "embed-certs-300026" (driver="docker")
	I0401 11:19:52.911545  653226 client.go:168] LocalClient.Create starting
	I0401 11:19:52.911612  653226 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca.pem
	I0401 11:19:52.911692  653226 main.go:141] libmachine: Decoding PEM data...
	I0401 11:19:52.911712  653226 main.go:141] libmachine: Parsing certificate...
	I0401 11:19:52.911748  653226 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18551-440344/.minikube/certs/cert.pem
	I0401 11:19:52.911776  653226 main.go:141] libmachine: Decoding PEM data...
	I0401 11:19:52.911789  653226 main.go:141] libmachine: Parsing certificate...
	I0401 11:19:52.912154  653226 cli_runner.go:164] Run: docker network inspect embed-certs-300026 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0401 11:19:52.929113  653226 cli_runner.go:211] docker network inspect embed-certs-300026 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0401 11:19:52.929203  653226 network_create.go:281] running [docker network inspect embed-certs-300026] to gather additional debugging logs...
	I0401 11:19:52.929226  653226 cli_runner.go:164] Run: docker network inspect embed-certs-300026
	W0401 11:19:52.946372  653226 cli_runner.go:211] docker network inspect embed-certs-300026 returned with exit code 1
	I0401 11:19:52.946405  653226 network_create.go:284] error running [docker network inspect embed-certs-300026]: docker network inspect embed-certs-300026: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-300026 not found
	I0401 11:19:52.946419  653226 network_create.go:286] output of [docker network inspect embed-certs-300026]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-300026 not found
	
	** /stderr **
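The non-zero exit above is expected on a fresh profile: minikube inspects the network before creating it, and "network embed-certs-300026 not found" simply means there is nothing to reuse. The same check can be reproduced by hand (hypothetical manual run):

	docker network inspect embed-certs-300026 >/dev/null 2>&1 || echo "not found - minikube will create it"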
	I0401 11:19:52.946516  653226 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 11:19:52.964373  653226 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c5ad722d7f9c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:2a:56:52:c0} reservation:<nil>}
	I0401 11:19:52.964880  653226 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4a59896f90af IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:dd:87:d3:c7} reservation:<nil>}
	I0401 11:19:52.965265  653226 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-193e4eac0d28 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:58:b7:a2:41} reservation:<nil>}
	I0401 11:19:52.965874  653226 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025aba50}
	I0401 11:19:52.965923  653226 network_create.go:124] attempt to create docker network embed-certs-300026 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0401 11:19:52.965979  653226 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-300026 embed-certs-300026
	I0401 11:19:53.038461  653226 network_create.go:108] docker network embed-certs-300026 192.168.76.0/24 created
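Subnet selection above works by walking the private ranges and skipping any that an existing bridge already holds (192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 here), so this profile lands on 192.168.76.0/24. A sketch for listing the minikube-created bridges and confirming the new subnet, reusing only the labels and format strings already shown in this log:

	docker network ls --filter label=created_by.minikube.sigs.k8s.io=true --format '{{.Name}}'
	docker network inspect embed-certs-300026 --format '{{range .IPAM.Config}}{{.Subnet}} gateway {{.Gateway}}{{end}}'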
	I0401 11:19:53.038501  653226 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-300026" container
	I0401 11:19:53.038575  653226 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0401 11:19:53.053781  653226 cli_runner.go:164] Run: docker volume create embed-certs-300026 --label name.minikube.sigs.k8s.io=embed-certs-300026 --label created_by.minikube.sigs.k8s.io=true
	I0401 11:19:53.071398  653226 oci.go:103] Successfully created a docker volume embed-certs-300026
	I0401 11:19:53.071504  653226 cli_runner.go:164] Run: docker run --rm --name embed-certs-300026-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-300026 --entrypoint /usr/bin/test -v embed-certs-300026:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 -d /var/lib
	I0401 11:19:53.648856  653226 oci.go:107] Successfully prepared a docker volume embed-certs-300026
	I0401 11:19:53.648904  653226 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0401 11:19:53.648925  653226 kic.go:194] Starting extracting preloaded images to volume ...
	I0401 11:19:53.649018  653226 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18551-440344/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-300026:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 -I lz4 -xf /preloaded.tar -C /extractDir
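The docker run above is the preload step: the tarball of v1.29.3/containerd images is mounted read-only into a throwaway kicbase container and untarred into the embed-certs-300026 volume, so containerd inside the node starts with its image store already populated (the extraction finishes in about 4.7s per the duration metric later in this log). A hypothetical spot-check that the content landed in the volume, reusing the same image and volume names:

	docker run --rm --entrypoint /usr/bin/ls -v embed-certs-300026:/var \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 /var/lib/containerd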
	I0401 11:19:53.160840  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:55.660972  643361 pod_ready.go:102] pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace has status "Ready":"False"
	I0401 11:19:57.161615  643361 pod_ready.go:81] duration metric: took 4m0.007317961s for pod "metrics-server-9975d5f86-hltl7" in "kube-system" namespace to be "Ready" ...
	E0401 11:19:57.161641  643361 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0401 11:19:57.161650  643361 pod_ready.go:38] duration metric: took 5m29.80625984s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 11:19:57.161699  643361 api_server.go:52] waiting for apiserver process to appear ...
	I0401 11:19:57.161736  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0401 11:19:57.161819  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 11:19:57.202189  643361 cri.go:89] found id: "cce3647fb821cde76abbc832b46d07b1e6e1ec536df027cdf7121aaf89d84d54"
	I0401 11:19:57.202214  643361 cri.go:89] found id: "e0e8f6e0a25eea42eff804ff1ba93241ee0066db840a8f6bbb42e7b3c2a34680"
	I0401 11:19:57.202220  643361 cri.go:89] found id: ""
	I0401 11:19:57.202228  643361 logs.go:276] 2 containers: [cce3647fb821cde76abbc832b46d07b1e6e1ec536df027cdf7121aaf89d84d54 e0e8f6e0a25eea42eff804ff1ba93241ee0066db840a8f6bbb42e7b3c2a34680]
	I0401 11:19:57.202286  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.206114  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.209990  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0401 11:19:57.210061  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 11:19:57.247621  643361 cri.go:89] found id: "c0de496168287609f4df97c9320899c715145b6fa2d6529ba6a6501f46b8d0aa"
	I0401 11:19:57.247645  643361 cri.go:89] found id: "e927ec7e31fb177888402d84efcb61941c46fabf9ca972372e611a63a958c932"
	I0401 11:19:57.247650  643361 cri.go:89] found id: ""
	I0401 11:19:57.247658  643361 logs.go:276] 2 containers: [c0de496168287609f4df97c9320899c715145b6fa2d6529ba6a6501f46b8d0aa e927ec7e31fb177888402d84efcb61941c46fabf9ca972372e611a63a958c932]
	I0401 11:19:57.247721  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.251932  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.255504  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0401 11:19:57.255575  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 11:19:57.295016  643361 cri.go:89] found id: "5185cdae9a75b34ee950c9f6f1cf6ae65c67838a04ec515a67a25505b4054b46"
	I0401 11:19:57.295041  643361 cri.go:89] found id: "8fb4a48cf27c6725c90da61570872834c03f4dc1865197b3f1426f0f827f894a"
	I0401 11:19:57.295047  643361 cri.go:89] found id: ""
	I0401 11:19:57.295055  643361 logs.go:276] 2 containers: [5185cdae9a75b34ee950c9f6f1cf6ae65c67838a04ec515a67a25505b4054b46 8fb4a48cf27c6725c90da61570872834c03f4dc1865197b3f1426f0f827f894a]
	I0401 11:19:57.295113  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.298679  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.302122  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0401 11:19:57.302219  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 11:19:57.347725  643361 cri.go:89] found id: "12b9a417beed52c0d14a146ade6251330fcf90c20e8666aa518dcf7ad2f7264e"
	I0401 11:19:57.347751  643361 cri.go:89] found id: "723bd4c79f413d52322777fc3b785237740dffca6ca180349b1c3d3f0909a986"
	I0401 11:19:57.347756  643361 cri.go:89] found id: ""
	I0401 11:19:57.347764  643361 logs.go:276] 2 containers: [12b9a417beed52c0d14a146ade6251330fcf90c20e8666aa518dcf7ad2f7264e 723bd4c79f413d52322777fc3b785237740dffca6ca180349b1c3d3f0909a986]
	I0401 11:19:57.347824  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.351519  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.355109  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0401 11:19:57.355199  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 11:19:57.393337  643361 cri.go:89] found id: "c2b2b6c421e068eab7bf881b707513703035897f1d7591fb54073efa33cf4466"
	I0401 11:19:57.393359  643361 cri.go:89] found id: "f9d360e936a135d336c2d41db86ccc367112509974d2c56fcd3fa3d7fb335b9a"
	I0401 11:19:57.393364  643361 cri.go:89] found id: ""
	I0401 11:19:57.393372  643361 logs.go:276] 2 containers: [c2b2b6c421e068eab7bf881b707513703035897f1d7591fb54073efa33cf4466 f9d360e936a135d336c2d41db86ccc367112509974d2c56fcd3fa3d7fb335b9a]
	I0401 11:19:57.393452  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.398385  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.401787  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 11:19:57.401866  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 11:19:57.441491  643361 cri.go:89] found id: "db524e3e456ecefc70df2862e168f564c1c378c87591f924065c027ff5cba833"
	I0401 11:19:57.441516  643361 cri.go:89] found id: "bc5d280a36c75d595a5c1cd4fe8b305b0c4080ad9491bcbc2be95f95859bde09"
	I0401 11:19:57.441522  643361 cri.go:89] found id: ""
	I0401 11:19:57.441530  643361 logs.go:276] 2 containers: [db524e3e456ecefc70df2862e168f564c1c378c87591f924065c027ff5cba833 bc5d280a36c75d595a5c1cd4fe8b305b0c4080ad9491bcbc2be95f95859bde09]
	I0401 11:19:57.441584  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.446513  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.449946  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0401 11:19:57.450017  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 11:19:57.490437  643361 cri.go:89] found id: "49eb449739e0b955225619deb122badd906ffcfad1c39aa19b6beda5721d93ad"
	I0401 11:19:57.490458  643361 cri.go:89] found id: "000258366df555fd593e48dc6fd6719819779883aa429d40cd8ed39751170282"
	I0401 11:19:57.490463  643361 cri.go:89] found id: ""
	I0401 11:19:57.490470  643361 logs.go:276] 2 containers: [49eb449739e0b955225619deb122badd906ffcfad1c39aa19b6beda5721d93ad 000258366df555fd593e48dc6fd6719819779883aa429d40cd8ed39751170282]
	I0401 11:19:57.490526  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.495230  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.499184  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 11:19:57.499303  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 11:19:57.544025  643361 cri.go:89] found id: "408a789b3a96ce3c6f7fd9adb8c71566a9f0d114f73314b4be1662ad97d0f023"
	I0401 11:19:57.544049  643361 cri.go:89] found id: ""
	I0401 11:19:57.544057  643361 logs.go:276] 1 containers: [408a789b3a96ce3c6f7fd9adb8c71566a9f0d114f73314b4be1662ad97d0f023]
	I0401 11:19:57.544138  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.547795  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0401 11:19:57.547891  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0401 11:19:57.587290  643361 cri.go:89] found id: "27b3a26cd6f52effeb1f6ec35ce6166315c1da769749f083642e75e517cc5ec2"
	I0401 11:19:57.587314  643361 cri.go:89] found id: "27fe2520ebb746e49d07fcf1ad334f5c492ba78c197570a874201267d3ffaa3f"
	I0401 11:19:57.587320  643361 cri.go:89] found id: ""
	I0401 11:19:57.587328  643361 logs.go:276] 2 containers: [27b3a26cd6f52effeb1f6ec35ce6166315c1da769749f083642e75e517cc5ec2 27fe2520ebb746e49d07fcf1ad334f5c492ba78c197570a874201267d3ffaa3f]
	I0401 11:19:57.587409  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.591008  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:19:57.594295  643361 logs.go:123] Gathering logs for kube-scheduler [723bd4c79f413d52322777fc3b785237740dffca6ca180349b1c3d3f0909a986] ...
	I0401 11:19:57.594316  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 723bd4c79f413d52322777fc3b785237740dffca6ca180349b1c3d3f0909a986"
	I0401 11:19:57.652645  643361 logs.go:123] Gathering logs for kube-controller-manager [bc5d280a36c75d595a5c1cd4fe8b305b0c4080ad9491bcbc2be95f95859bde09] ...
	I0401 11:19:57.652677  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc5d280a36c75d595a5c1cd4fe8b305b0c4080ad9491bcbc2be95f95859bde09"
	I0401 11:19:57.738761  643361 logs.go:123] Gathering logs for storage-provisioner [27b3a26cd6f52effeb1f6ec35ce6166315c1da769749f083642e75e517cc5ec2] ...
	I0401 11:19:57.738797  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b3a26cd6f52effeb1f6ec35ce6166315c1da769749f083642e75e517cc5ec2"
	I0401 11:19:57.785705  643361 logs.go:123] Gathering logs for kube-scheduler [12b9a417beed52c0d14a146ade6251330fcf90c20e8666aa518dcf7ad2f7264e] ...
	I0401 11:19:57.785731  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12b9a417beed52c0d14a146ade6251330fcf90c20e8666aa518dcf7ad2f7264e"
	I0401 11:19:57.829596  643361 logs.go:123] Gathering logs for kubernetes-dashboard [408a789b3a96ce3c6f7fd9adb8c71566a9f0d114f73314b4be1662ad97d0f023] ...
	I0401 11:19:57.829625  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 408a789b3a96ce3c6f7fd9adb8c71566a9f0d114f73314b4be1662ad97d0f023"
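Each "Gathering logs for ..." step above replays the container IDs collected earlier through crictl. The same collection can be reproduced by hand inside the old-k8s-version-869040 node (hypothetical manual run, IDs copied from this log):

	sudo /usr/bin/crictl ps -a --quiet --name=kube-apiserver
	sudo /usr/bin/crictl logs --tail 400 cce3647fb821cde76abbc832b46d07b1e6e1ec536df027cdf7121aaf89d84d54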
	I0401 11:19:58.346336  653226 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18551-440344/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-300026:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 -I lz4 -xf /preloaded.tar -C /extractDir: (4.697258248s)
	I0401 11:19:58.346372  653226 kic.go:203] duration metric: took 4.697443753s to extract preloaded images to volume ...
	W0401 11:19:58.346554  653226 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 11:19:58.346678  653226 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 11:19:58.453407  653226 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-300026 --name embed-certs-300026 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-300026 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-300026 --network embed-certs-300026 --ip 192.168.76.2 --volume embed-certs-300026:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82
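The container created above publishes SSH (22), the Docker port (2376), the registry port (5000), the API server (8443) and 32443 only on 127.0.0.1, each on a randomly assigned host port. A hedged sketch for reading those assignments back; the SSH one resolves to 127.0.0.1:33476 later in this log:

	docker port embed-certs-300026 22/tcp      # SSH endpoint used by the provisioner below
	docker port embed-certs-300026 8443/tcp    # kube-apiserver endpoint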
	I0401 11:19:58.846737  653226 cli_runner.go:164] Run: docker container inspect embed-certs-300026 --format={{.State.Running}}
	I0401 11:19:58.870199  653226 cli_runner.go:164] Run: docker container inspect embed-certs-300026 --format={{.State.Status}}
	I0401 11:19:58.899948  653226 cli_runner.go:164] Run: docker exec embed-certs-300026 stat /var/lib/dpkg/alternatives/iptables
	I0401 11:19:58.965771  653226 oci.go:144] the created container "embed-certs-300026" has a running status.
	I0401 11:19:58.965798  653226 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18551-440344/.minikube/machines/embed-certs-300026/id_rsa...
	I0401 11:19:59.948855  653226 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18551-440344/.minikube/machines/embed-certs-300026/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 11:19:59.970477  653226 cli_runner.go:164] Run: docker container inspect embed-certs-300026 --format={{.State.Status}}
	I0401 11:19:59.995365  653226 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 11:19:59.995386  653226 kic_runner.go:114] Args: [docker exec --privileged embed-certs-300026 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 11:20:00.290132  653226 cli_runner.go:164] Run: docker container inspect embed-certs-300026 --format={{.State.Status}}
	I0401 11:20:00.420394  653226 machine.go:94] provisionDockerMachine start ...
	I0401 11:20:00.420512  653226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-300026
	I0401 11:20:00.514624  653226 main.go:141] libmachine: Using SSH client type: native
	I0401 11:20:00.514945  653226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33476 <nil> <nil>}
	I0401 11:20:00.514957  653226 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 11:20:00.713765  653226 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-300026
	
	I0401 11:20:00.713792  653226 ubuntu.go:169] provisioning hostname "embed-certs-300026"
	I0401 11:20:00.713864  653226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-300026
	I0401 11:20:00.745684  653226 main.go:141] libmachine: Using SSH client type: native
	I0401 11:20:00.745952  653226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33476 <nil> <nil>}
	I0401 11:20:00.745973  653226 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-300026 && echo "embed-certs-300026" | sudo tee /etc/hostname
	I0401 11:20:00.919821  653226 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-300026
	
	I0401 11:20:00.919913  653226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-300026
	I0401 11:20:00.943018  653226 main.go:141] libmachine: Using SSH client type: native
	I0401 11:20:00.943270  653226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33476 <nil> <nil>}
	I0401 11:20:00.943292  653226 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-300026' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-300026/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-300026' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 11:20:01.093290  653226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
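The script above keeps /etc/hosts inside the node consistent with the hostname that was just set: an existing 127.0.1.1 entry is rewritten, otherwise one is appended. A hypothetical verification after the step:

	docker exec embed-certs-300026 grep 127.0.1.1 /etc/hosts    # expected to show: 127.0.1.1 embed-certs-300026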
	I0401 11:20:01.093383  653226 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18551-440344/.minikube CaCertPath:/home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18551-440344/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18551-440344/.minikube}
	I0401 11:20:01.093445  653226 ubuntu.go:177] setting up certificates
	I0401 11:20:01.093474  653226 provision.go:84] configureAuth start
	I0401 11:20:01.093588  653226 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-300026
	I0401 11:20:01.112498  653226 provision.go:143] copyHostCerts
	I0401 11:20:01.112567  653226 exec_runner.go:144] found /home/jenkins/minikube-integration/18551-440344/.minikube/cert.pem, removing ...
	I0401 11:20:01.112577  653226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18551-440344/.minikube/cert.pem
	I0401 11:20:01.112646  653226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18551-440344/.minikube/cert.pem (1123 bytes)
	I0401 11:20:01.112738  653226 exec_runner.go:144] found /home/jenkins/minikube-integration/18551-440344/.minikube/key.pem, removing ...
	I0401 11:20:01.112744  653226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18551-440344/.minikube/key.pem
	I0401 11:20:01.112805  653226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18551-440344/.minikube/key.pem (1679 bytes)
	I0401 11:20:01.112864  653226 exec_runner.go:144] found /home/jenkins/minikube-integration/18551-440344/.minikube/ca.pem, removing ...
	I0401 11:20:01.112876  653226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18551-440344/.minikube/ca.pem
	I0401 11:20:01.112901  653226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18551-440344/.minikube/ca.pem (1078 bytes)
	I0401 11:20:01.112947  653226 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18551-440344/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca-key.pem org=jenkins.embed-certs-300026 san=[127.0.0.1 192.168.76.2 embed-certs-300026 localhost minikube]
	I0401 11:20:01.749258  653226 provision.go:177] copyRemoteCerts
	I0401 11:20:01.749336  653226 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 11:20:01.749386  653226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-300026
	I0401 11:20:01.766264  653226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/embed-certs-300026/id_rsa Username:docker}
	I0401 11:20:01.870603  653226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 11:20:01.897838  653226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0401 11:20:01.925716  653226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 11:20:01.951890  653226 provision.go:87] duration metric: took 858.372935ms to configureAuth
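configureAuth above generates a server certificate whose SANs cover 127.0.0.1, 192.168.76.2, embed-certs-300026, localhost and minikube, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. A sketch for confirming the SANs on the generated cert, assuming openssl is present on the host:

	openssl x509 -noout -text -in /home/jenkins/minikube-integration/18551-440344/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'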
	I0401 11:20:01.951967  653226 ubuntu.go:193] setting minikube options for container-runtime
	I0401 11:20:01.952186  653226 config.go:182] Loaded profile config "embed-certs-300026": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0401 11:20:01.952222  653226 machine.go:97] duration metric: took 1.531807232s to provisionDockerMachine
	I0401 11:20:01.952238  653226 client.go:171] duration metric: took 9.040683619s to LocalClient.Create
	I0401 11:20:01.952258  653226 start.go:167] duration metric: took 9.040746518s to libmachine.API.Create "embed-certs-300026"
	I0401 11:20:01.952265  653226 start.go:293] postStartSetup for "embed-certs-300026" (driver="docker")
	I0401 11:20:01.952276  653226 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 11:20:01.952352  653226 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 11:20:01.952418  653226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-300026
	I0401 11:20:01.967537  653226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/embed-certs-300026/id_rsa Username:docker}
	I0401 11:20:02.067495  653226 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 11:20:02.071104  653226 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 11:20:02.071151  653226 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 11:20:02.071163  653226 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 11:20:02.071174  653226 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0401 11:20:02.071189  653226 filesync.go:126] Scanning /home/jenkins/minikube-integration/18551-440344/.minikube/addons for local assets ...
	I0401 11:20:02.071256  653226 filesync.go:126] Scanning /home/jenkins/minikube-integration/18551-440344/.minikube/files for local assets ...
	I0401 11:20:02.071337  653226 filesync.go:149] local asset: /home/jenkins/minikube-integration/18551-440344/.minikube/files/etc/ssl/certs/4457542.pem -> 4457542.pem in /etc/ssl/certs
	I0401 11:20:02.071449  653226 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 11:20:02.081475  653226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/files/etc/ssl/certs/4457542.pem --> /etc/ssl/certs/4457542.pem (1708 bytes)
	I0401 11:20:02.109629  653226 start.go:296] duration metric: took 157.34857ms for postStartSetup
	I0401 11:20:02.110026  653226 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-300026
	I0401 11:20:02.126024  653226 profile.go:143] Saving config to /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/config.json ...
	I0401 11:20:02.126328  653226 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 11:20:02.126390  653226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-300026
	I0401 11:20:02.141954  653226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/embed-certs-300026/id_rsa Username:docker}
	I0401 11:20:02.242503  653226 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 11:20:02.247503  653226 start.go:128] duration metric: took 9.338952513s to createHost
	I0401 11:20:02.247532  653226 start.go:83] releasing machines lock for "embed-certs-300026", held for 9.339109038s
	I0401 11:20:02.247608  653226 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-300026
	I0401 11:20:02.263396  653226 ssh_runner.go:195] Run: cat /version.json
	I0401 11:20:02.263452  653226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-300026
	I0401 11:20:02.263547  653226 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 11:20:02.263610  653226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-300026
	I0401 11:20:02.291121  653226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/embed-certs-300026/id_rsa Username:docker}
	I0401 11:20:02.293719  653226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/embed-certs-300026/id_rsa Username:docker}
	I0401 11:20:02.517006  653226 ssh_runner.go:195] Run: systemctl --version
	I0401 11:20:02.521491  653226 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 11:20:02.525341  653226 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0401 11:20:02.551792  653226 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0401 11:20:02.551886  653226 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 11:20:02.591248  653226 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
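The two find commands above first patch the loopback CNI config (adding a name and pinning cniVersion to 1.0.0) and then rename any bridge/podman configs to *.mk_disabled, so the kindnet CNI recommended earlier for the docker+containerd combination is the only pod network left to install. A hypothetical check of what remains active in the node:

	docker exec embed-certs-300026 ls /etc/cni/net.d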
	I0401 11:20:02.591270  653226 start.go:494] detecting cgroup driver to use...
	I0401 11:20:02.591306  653226 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0401 11:20:02.591368  653226 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0401 11:20:02.606685  653226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0401 11:20:02.620652  653226 docker.go:217] disabling cri-docker service (if available) ...
	I0401 11:20:02.620721  653226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 11:20:02.635981  653226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 11:20:02.649798  653226 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
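After detecting the cgroupfs cgroup driver, the steps above stop crio and stop/disable cri-docker so that containerd is the only runtime left serving CRI inside the node. A rough manual equivalent of the same checks (unit names taken from the commands above):

	docker exec embed-certs-300026 systemctl is-active crio cri-docker.service containerd    # one state per unit; only containerd should be active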
	I0401 11:19:57.876043  643361 logs.go:123] Gathering logs for storage-provisioner [27fe2520ebb746e49d07fcf1ad334f5c492ba78c197570a874201267d3ffaa3f] ...
	I0401 11:19:57.879469  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fe2520ebb746e49d07fcf1ad334f5c492ba78c197570a874201267d3ffaa3f"
	I0401 11:19:57.918548  643361 logs.go:123] Gathering logs for containerd ...
	I0401 11:19:57.918575  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0401 11:19:57.976403  643361 logs.go:123] Gathering logs for container status ...
	I0401 11:19:57.976486  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 11:19:58.038394  643361 logs.go:123] Gathering logs for kubelet ...
	I0401 11:19:58.038564  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
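The journalctl call above feeds the last 400 kubelet lines through minikube's problem scanner, which is what produces the "Found kubelet problem" warnings that follow. A rough stand-alone approximation of that filter (the pattern is an assumption, matching klog error lines such as "E0401 ..."):

	sudo journalctl -u kubelet -n 400 | grep -E ' E[0-9]{4} '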
	W0401 11:19:58.093664  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.253107     661 reflector.go:138] object-"kube-system"/"coredns-token-l8rmx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-l8rmx" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:19:58.093916  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.253340     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:19:58.094136  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.253531     661 reflector.go:138] object-"kube-system"/"kindnet-token-crnqw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-crnqw" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:19:58.094352  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.253724     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-26j8b": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-26j8b" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:19:58.094561  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.253919     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:19:58.094783  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.254131     661 reflector.go:138] object-"kube-system"/"metrics-server-token-p8whl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-p8whl" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:19:58.095010  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.254336     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-w57c5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-w57c5" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:19:58.095219  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.254517     661 reflector.go:138] object-"default"/"default-token-ldjjx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-ldjjx" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:19:58.105836  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:30 old-k8s-version-869040 kubelet[661]: E0401 11:14:30.412495     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0401 11:19:58.106028  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:30 old-k8s-version-869040 kubelet[661]: E0401 11:14:30.876693     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.108782  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:45 old-k8s-version-869040 kubelet[661]: E0401 11:14:45.796344     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0401 11:19:58.109112  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:46 old-k8s-version-869040 kubelet[661]: E0401 11:14:46.235753     661 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-llv5m": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-llv5m" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:19:58.111078  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:57 old-k8s-version-869040 kubelet[661]: E0401 11:14:57.947477     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.111265  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:58 old-k8s-version-869040 kubelet[661]: E0401 11:14:58.786263     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.111591  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:58 old-k8s-version-869040 kubelet[661]: E0401 11:14:58.962984     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.112252  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:02 old-k8s-version-869040 kubelet[661]: E0401 11:15:02.187397     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.115174  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:13 old-k8s-version-869040 kubelet[661]: E0401 11:15:13.820810     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0401 11:19:58.115664  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:15 old-k8s-version-869040 kubelet[661]: E0401 11:15:15.015969     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.115995  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:22 old-k8s-version-869040 kubelet[661]: E0401 11:15:22.188017     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.116182  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:24 old-k8s-version-869040 kubelet[661]: E0401 11:15:24.789529     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.116494  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:37 old-k8s-version-869040 kubelet[661]: E0401 11:15:37.787553     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.116953  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:38 old-k8s-version-869040 kubelet[661]: E0401 11:15:38.129930     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.117305  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:42 old-k8s-version-869040 kubelet[661]: E0401 11:15:42.188410     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.117493  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:49 old-k8s-version-869040 kubelet[661]: E0401 11:15:49.785437     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.117820  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:56 old-k8s-version-869040 kubelet[661]: E0401 11:15:56.789835     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.120259  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:01 old-k8s-version-869040 kubelet[661]: E0401 11:16:01.793557     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0401 11:19:58.120590  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:11 old-k8s-version-869040 kubelet[661]: E0401 11:16:11.785144     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.120783  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:14 old-k8s-version-869040 kubelet[661]: E0401 11:16:14.790457     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.121375  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:25 old-k8s-version-869040 kubelet[661]: E0401 11:16:25.268985     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.121562  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:29 old-k8s-version-869040 kubelet[661]: E0401 11:16:29.785550     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.121896  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:32 old-k8s-version-869040 kubelet[661]: E0401 11:16:32.188113     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.122084  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:40 old-k8s-version-869040 kubelet[661]: E0401 11:16:40.785613     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.122409  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:43 old-k8s-version-869040 kubelet[661]: E0401 11:16:43.785076     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.122598  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:53 old-k8s-version-869040 kubelet[661]: E0401 11:16:53.785492     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.122923  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:56 old-k8s-version-869040 kubelet[661]: E0401 11:16:56.785610     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.123112  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:06 old-k8s-version-869040 kubelet[661]: E0401 11:17:06.785564     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.123439  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:09 old-k8s-version-869040 kubelet[661]: E0401 11:17:09.785023     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.123624  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:20 old-k8s-version-869040 kubelet[661]: E0401 11:17:20.785617     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.123948  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:24 old-k8s-version-869040 kubelet[661]: E0401 11:17:24.785143     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.126404  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:34 old-k8s-version-869040 kubelet[661]: E0401 11:17:34.798457     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0401 11:19:58.126735  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:35 old-k8s-version-869040 kubelet[661]: E0401 11:17:35.785081     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.127319  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:47 old-k8s-version-869040 kubelet[661]: E0401 11:17:47.454111     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.127502  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:48 old-k8s-version-869040 kubelet[661]: E0401 11:17:48.786169     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.127836  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:52 old-k8s-version-869040 kubelet[661]: E0401 11:17:52.187364     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.128019  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:02 old-k8s-version-869040 kubelet[661]: E0401 11:18:02.789625     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.128344  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:03 old-k8s-version-869040 kubelet[661]: E0401 11:18:03.785107     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.128673  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:15 old-k8s-version-869040 kubelet[661]: E0401 11:18:15.785146     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.128861  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:16 old-k8s-version-869040 kubelet[661]: E0401 11:18:16.786414     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.129706  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:27 old-k8s-version-869040 kubelet[661]: E0401 11:18:27.785261     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.129919  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:31 old-k8s-version-869040 kubelet[661]: E0401 11:18:31.785780     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.130260  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:40 old-k8s-version-869040 kubelet[661]: E0401 11:18:40.785417     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.130445  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:43 old-k8s-version-869040 kubelet[661]: E0401 11:18:43.785421     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.130779  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:53 old-k8s-version-869040 kubelet[661]: E0401 11:18:53.785086     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.130964  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:54 old-k8s-version-869040 kubelet[661]: E0401 11:18:54.785659     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.131289  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:04 old-k8s-version-869040 kubelet[661]: E0401 11:19:04.785200     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.131471  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:05 old-k8s-version-869040 kubelet[661]: E0401 11:19:05.785514     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.131653  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:16 old-k8s-version-869040 kubelet[661]: E0401 11:19:16.789686     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.131979  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:17 old-k8s-version-869040 kubelet[661]: E0401 11:19:17.785382     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.132168  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:28 old-k8s-version-869040 kubelet[661]: E0401 11:19:28.786172     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.132493  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:31 old-k8s-version-869040 kubelet[661]: E0401 11:19:31.785264     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:58.132676  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:43 old-k8s-version-869040 kubelet[661]: E0401 11:19:43.785550     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:58.133011  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:46 old-k8s-version-869040 kubelet[661]: E0401 11:19:46.785335     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	I0401 11:19:58.133022  643361 logs.go:123] Gathering logs for describe nodes ...
	I0401 11:19:58.133037  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 11:19:58.386822  643361 logs.go:123] Gathering logs for etcd [e927ec7e31fb177888402d84efcb61941c46fabf9ca972372e611a63a958c932] ...
	I0401 11:19:58.386892  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e927ec7e31fb177888402d84efcb61941c46fabf9ca972372e611a63a958c932"
	I0401 11:19:58.463147  643361 logs.go:123] Gathering logs for coredns [5185cdae9a75b34ee950c9f6f1cf6ae65c67838a04ec515a67a25505b4054b46] ...
	I0401 11:19:58.463202  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5185cdae9a75b34ee950c9f6f1cf6ae65c67838a04ec515a67a25505b4054b46"
	I0401 11:19:58.567970  643361 logs.go:123] Gathering logs for kube-proxy [c2b2b6c421e068eab7bf881b707513703035897f1d7591fb54073efa33cf4466] ...
	I0401 11:19:58.567996  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2b2b6c421e068eab7bf881b707513703035897f1d7591fb54073efa33cf4466"
	I0401 11:19:58.621882  643361 logs.go:123] Gathering logs for kube-controller-manager [db524e3e456ecefc70df2862e168f564c1c378c87591f924065c027ff5cba833] ...
	I0401 11:19:58.621914  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db524e3e456ecefc70df2862e168f564c1c378c87591f924065c027ff5cba833"
	I0401 11:19:58.714298  643361 logs.go:123] Gathering logs for etcd [c0de496168287609f4df97c9320899c715145b6fa2d6529ba6a6501f46b8d0aa] ...
	I0401 11:19:58.714377  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0de496168287609f4df97c9320899c715145b6fa2d6529ba6a6501f46b8d0aa"
	I0401 11:19:58.808824  643361 logs.go:123] Gathering logs for coredns [8fb4a48cf27c6725c90da61570872834c03f4dc1865197b3f1426f0f827f894a] ...
	I0401 11:19:58.809000  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fb4a48cf27c6725c90da61570872834c03f4dc1865197b3f1426f0f827f894a"
	I0401 11:19:58.914734  643361 logs.go:123] Gathering logs for kube-proxy [f9d360e936a135d336c2d41db86ccc367112509974d2c56fcd3fa3d7fb335b9a] ...
	I0401 11:19:58.914760  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9d360e936a135d336c2d41db86ccc367112509974d2c56fcd3fa3d7fb335b9a"
	I0401 11:19:59.000863  643361 logs.go:123] Gathering logs for kindnet [49eb449739e0b955225619deb122badd906ffcfad1c39aa19b6beda5721d93ad] ...
	I0401 11:19:59.000905  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49eb449739e0b955225619deb122badd906ffcfad1c39aa19b6beda5721d93ad"
	I0401 11:19:59.132649  643361 logs.go:123] Gathering logs for kindnet [000258366df555fd593e48dc6fd6719819779883aa429d40cd8ed39751170282] ...
	I0401 11:19:59.132683  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000258366df555fd593e48dc6fd6719819779883aa429d40cd8ed39751170282"
	I0401 11:19:59.237488  643361 logs.go:123] Gathering logs for dmesg ...
	I0401 11:19:59.237526  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 11:19:59.268034  643361 logs.go:123] Gathering logs for kube-apiserver [cce3647fb821cde76abbc832b46d07b1e6e1ec536df027cdf7121aaf89d84d54] ...
	I0401 11:19:59.268113  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce3647fb821cde76abbc832b46d07b1e6e1ec536df027cdf7121aaf89d84d54"
	I0401 11:19:59.436524  643361 logs.go:123] Gathering logs for kube-apiserver [e0e8f6e0a25eea42eff804ff1ba93241ee0066db840a8f6bbb42e7b3c2a34680] ...
	I0401 11:19:59.436572  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0e8f6e0a25eea42eff804ff1ba93241ee0066db840a8f6bbb42e7b3c2a34680"
	I0401 11:19:59.651913  643361 out.go:304] Setting ErrFile to fd 2...
	I0401 11:19:59.651956  643361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0401 11:19:59.652040  643361 out.go:239] X Problems detected in kubelet:
	W0401 11:19:59.652055  643361 out.go:239]   Apr 01 11:19:17 old-k8s-version-869040 kubelet[661]: E0401 11:19:17.785382     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:59.652068  643361 out.go:239]   Apr 01 11:19:28 old-k8s-version-869040 kubelet[661]: E0401 11:19:28.786172     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:59.652081  643361 out.go:239]   Apr 01 11:19:31 old-k8s-version-869040 kubelet[661]: E0401 11:19:31.785264     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:19:59.652096  643361 out.go:239]   Apr 01 11:19:43 old-k8s-version-869040 kubelet[661]: E0401 11:19:43.785550     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:19:59.652120  643361 out.go:239]   Apr 01 11:19:46 old-k8s-version-869040 kubelet[661]: E0401 11:19:46.785335     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	I0401 11:19:59.652127  643361 out.go:304] Setting ErrFile to fd 2...
	I0401 11:19:59.652133  643361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:20:02.742849  653226 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 11:20:02.870568  653226 docker.go:233] disabling docker service ...
	I0401 11:20:02.870636  653226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 11:20:02.896418  653226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 11:20:02.910129  653226 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 11:20:03.013628  653226 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 11:20:03.107090  653226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 11:20:03.122077  653226 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 11:20:03.142059  653226 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0401 11:20:03.153101  653226 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0401 11:20:03.164262  653226 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0401 11:20:03.164363  653226 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0401 11:20:03.176112  653226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:20:03.188399  653226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0401 11:20:03.199773  653226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0401 11:20:03.211812  653226 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 11:20:03.222383  653226 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0401 11:20:03.233044  653226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0401 11:20:03.243892  653226 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0401 11:20:03.254309  653226 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 11:20:03.263901  653226 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 11:20:03.272933  653226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:20:03.357403  653226 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0401 11:20:03.502642  653226 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0401 11:20:03.502717  653226 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0401 11:20:03.510158  653226 start.go:562] Will wait 60s for crictl version
	I0401 11:20:03.510231  653226 ssh_runner.go:195] Run: which crictl
	I0401 11:20:03.517566  653226 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 11:20:03.562263  653226 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0401 11:20:03.562392  653226 ssh_runner.go:195] Run: containerd --version
	I0401 11:20:03.585923  653226 ssh_runner.go:195] Run: containerd --version
	I0401 11:20:03.611129  653226 out.go:177] * Preparing Kubernetes v1.29.3 on containerd 1.6.28 ...
	I0401 11:20:03.613611  653226 cli_runner.go:164] Run: docker network inspect embed-certs-300026 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 11:20:03.628728  653226 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0401 11:20:03.632532  653226 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 11:20:03.644178  653226 kubeadm.go:877] updating cluster {Name:embed-certs-300026 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-300026 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 11:20:03.644307  653226 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0401 11:20:03.644369  653226 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 11:20:03.686732  653226 containerd.go:627] all images are preloaded for containerd runtime.
	I0401 11:20:03.686756  653226 containerd.go:534] Images already preloaded, skipping extraction
	I0401 11:20:03.686820  653226 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 11:20:03.724563  653226 containerd.go:627] all images are preloaded for containerd runtime.
	I0401 11:20:03.724587  653226 cache_images.go:84] Images are preloaded, skipping loading
	I0401 11:20:03.724595  653226 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.29.3 containerd true true} ...
	I0401 11:20:03.724702  653226 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-300026 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-300026 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 11:20:03.724790  653226 ssh_runner.go:195] Run: sudo crictl info
	I0401 11:20:03.772612  653226 cni.go:84] Creating CNI manager for ""
	I0401 11:20:03.772637  653226 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0401 11:20:03.772647  653226 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 11:20:03.772669  653226 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-300026 NodeName:embed-certs-300026 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 11:20:03.772822  653226 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-300026"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 11:20:03.772895  653226 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 11:20:03.781924  653226 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 11:20:03.781993  653226 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 11:20:03.791951  653226 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0401 11:20:03.810347  653226 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 11:20:03.829172  653226 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0401 11:20:03.848389  653226 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0401 11:20:03.851970  653226 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 11:20:03.864652  653226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 11:20:03.962846  653226 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 11:20:03.979902  653226 certs.go:68] Setting up /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026 for IP: 192.168.76.2
	I0401 11:20:03.979923  653226 certs.go:194] generating shared ca certs ...
	I0401 11:20:03.979940  653226 certs.go:226] acquiring lock for ca certs: {Name:mkcd78655f97da7a9cc32a54b546078a42807779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:20:03.980145  653226 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18551-440344/.minikube/ca.key
	I0401 11:20:03.980203  653226 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18551-440344/.minikube/proxy-client-ca.key
	I0401 11:20:03.980216  653226 certs.go:256] generating profile certs ...
	I0401 11:20:03.980272  653226 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/client.key
	I0401 11:20:03.980295  653226 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/client.crt with IP's: []
	I0401 11:20:04.188243  653226 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/client.crt ...
	I0401 11:20:04.188277  653226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/client.crt: {Name:mk94c36d1ba7c0b9c739dd55396720a4944da820 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:20:04.188476  653226 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/client.key ...
	I0401 11:20:04.188493  653226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/client.key: {Name:mke241f78bf9842390f679b7c4476f2190ae0dda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:20:04.189603  653226 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/apiserver.key.51c0a153
	I0401 11:20:04.189628  653226 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/apiserver.crt.51c0a153 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0401 11:20:04.399370  653226 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/apiserver.crt.51c0a153 ...
	I0401 11:20:04.399402  653226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/apiserver.crt.51c0a153: {Name:mkea53b27028177bc338405ba89ef46227ec1331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:20:04.400011  653226 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/apiserver.key.51c0a153 ...
	I0401 11:20:04.400029  653226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/apiserver.key.51c0a153: {Name:mkb4e0fde05f5a5f88a6408ca1eadd8d35220edd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:20:04.400487  653226 certs.go:381] copying /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/apiserver.crt.51c0a153 -> /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/apiserver.crt
	I0401 11:20:04.400573  653226 certs.go:385] copying /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/apiserver.key.51c0a153 -> /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/apiserver.key
	I0401 11:20:04.400639  653226 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/proxy-client.key
	I0401 11:20:04.400659  653226 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/proxy-client.crt with IP's: []
	I0401 11:20:04.578408  653226 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/proxy-client.crt ...
	I0401 11:20:04.578440  653226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/proxy-client.crt: {Name:mk3e983112615f5b6bef3f45767585bd0c695cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:20:04.579028  653226 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/proxy-client.key ...
	I0401 11:20:04.579075  653226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/proxy-client.key: {Name:mk56163f4d7f6105766e83696cbfb1d4602f1a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 11:20:04.579365  653226 certs.go:484] found cert: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/445754.pem (1338 bytes)
	W0401 11:20:04.579435  653226 certs.go:480] ignoring /home/jenkins/minikube-integration/18551-440344/.minikube/certs/445754_empty.pem, impossibly tiny 0 bytes
	I0401 11:20:04.579453  653226 certs.go:484] found cert: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 11:20:04.579502  653226 certs.go:484] found cert: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/ca.pem (1078 bytes)
	I0401 11:20:04.579567  653226 certs.go:484] found cert: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/cert.pem (1123 bytes)
	I0401 11:20:04.579599  653226 certs.go:484] found cert: /home/jenkins/minikube-integration/18551-440344/.minikube/certs/key.pem (1679 bytes)
	I0401 11:20:04.579670  653226 certs.go:484] found cert: /home/jenkins/minikube-integration/18551-440344/.minikube/files/etc/ssl/certs/4457542.pem (1708 bytes)
	I0401 11:20:04.580431  653226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 11:20:04.604638  653226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 11:20:04.631042  653226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 11:20:04.656766  653226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0401 11:20:04.682443  653226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 11:20:04.710785  653226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 11:20:04.751822  653226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 11:20:04.782963  653226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/embed-certs-300026/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 11:20:04.816894  653226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/certs/445754.pem --> /usr/share/ca-certificates/445754.pem (1338 bytes)
	I0401 11:20:04.846530  653226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/files/etc/ssl/certs/4457542.pem --> /usr/share/ca-certificates/4457542.pem (1708 bytes)
	I0401 11:20:04.872055  653226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18551-440344/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 11:20:04.896220  653226 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 11:20:04.915313  653226 ssh_runner.go:195] Run: openssl version
	I0401 11:20:04.921907  653226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 11:20:04.931560  653226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:20:04.935083  653226 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 10:27 /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:20:04.935161  653226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 11:20:04.942405  653226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 11:20:04.951665  653226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/445754.pem && ln -fs /usr/share/ca-certificates/445754.pem /etc/ssl/certs/445754.pem"
	I0401 11:20:04.960928  653226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/445754.pem
	I0401 11:20:04.964601  653226 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 10:33 /usr/share/ca-certificates/445754.pem
	I0401 11:20:04.964702  653226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/445754.pem
	I0401 11:20:04.972181  653226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/445754.pem /etc/ssl/certs/51391683.0"
	I0401 11:20:04.981709  653226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4457542.pem && ln -fs /usr/share/ca-certificates/4457542.pem /etc/ssl/certs/4457542.pem"
	I0401 11:20:04.991317  653226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4457542.pem
	I0401 11:20:04.994986  653226 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 10:33 /usr/share/ca-certificates/4457542.pem
	I0401 11:20:04.995060  653226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4457542.pem
	I0401 11:20:05.003915  653226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4457542.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 11:20:05.033688  653226 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 11:20:05.037632  653226 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 11:20:05.037742  653226 kubeadm.go:391] StartCluster: {Name:embed-certs-300026 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-300026 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 11:20:05.037836  653226 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0401 11:20:05.037904  653226 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 11:20:05.080259  653226 cri.go:89] found id: ""
	I0401 11:20:05.080376  653226 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 11:20:05.091108  653226 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 11:20:05.101248  653226 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0401 11:20:05.101345  653226 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 11:20:05.110889  653226 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 11:20:05.110910  653226 kubeadm.go:156] found existing configuration files:
	
	I0401 11:20:05.110986  653226 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 11:20:05.120550  653226 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 11:20:05.120662  653226 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 11:20:05.129345  653226 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 11:20:05.138511  653226 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 11:20:05.138577  653226 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 11:20:05.147688  653226 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 11:20:05.156827  653226 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 11:20:05.156913  653226 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 11:20:05.165722  653226 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 11:20:05.175533  653226 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 11:20:05.175624  653226 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 11:20:05.184644  653226 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 11:20:05.237094  653226 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 11:20:05.237156  653226 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 11:20:05.276149  653226 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0401 11:20:05.276226  653226 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1056-aws
	I0401 11:20:05.276272  653226 kubeadm.go:309] OS: Linux
	I0401 11:20:05.276320  653226 kubeadm.go:309] CGROUPS_CPU: enabled
	I0401 11:20:05.276371  653226 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0401 11:20:05.276420  653226 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0401 11:20:05.276470  653226 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0401 11:20:05.276520  653226 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0401 11:20:05.276571  653226 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0401 11:20:05.276618  653226 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0401 11:20:05.276667  653226 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0401 11:20:05.276715  653226 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0401 11:20:05.353830  653226 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 11:20:05.354043  653226 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 11:20:05.354187  653226 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 11:20:05.609584  653226 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 11:20:05.612546  653226 out.go:204]   - Generating certificates and keys ...
	I0401 11:20:05.612705  653226 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 11:20:05.612943  653226 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 11:20:05.794351  653226 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 11:20:05.992848  653226 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0401 11:20:06.337889  653226 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0401 11:20:07.429241  653226 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0401 11:20:08.284788  653226 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0401 11:20:08.285219  653226 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [embed-certs-300026 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0401 11:20:09.045993  653226 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0401 11:20:09.046264  653226 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-300026 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0401 11:20:09.433306  653226 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 11:20:09.769835  653226 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 11:20:10.196498  653226 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0401 11:20:10.196784  653226 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 11:20:10.804540  653226 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 11:20:11.405588  653226 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 11:20:09.652958  643361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 11:20:09.667698  643361 api_server.go:72] duration metric: took 5m58.86071471s to wait for apiserver process to appear ...
	I0401 11:20:09.667722  643361 api_server.go:88] waiting for apiserver healthz status ...
	I0401 11:20:09.667756  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0401 11:20:09.667812  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 11:20:09.723733  643361 cri.go:89] found id: "cce3647fb821cde76abbc832b46d07b1e6e1ec536df027cdf7121aaf89d84d54"
	I0401 11:20:09.723755  643361 cri.go:89] found id: "e0e8f6e0a25eea42eff804ff1ba93241ee0066db840a8f6bbb42e7b3c2a34680"
	I0401 11:20:09.723759  643361 cri.go:89] found id: ""
	I0401 11:20:09.723767  643361 logs.go:276] 2 containers: [cce3647fb821cde76abbc832b46d07b1e6e1ec536df027cdf7121aaf89d84d54 e0e8f6e0a25eea42eff804ff1ba93241ee0066db840a8f6bbb42e7b3c2a34680]
	I0401 11:20:09.723823  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:09.727691  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:09.731263  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0401 11:20:09.731374  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 11:20:09.816767  643361 cri.go:89] found id: "c0de496168287609f4df97c9320899c715145b6fa2d6529ba6a6501f46b8d0aa"
	I0401 11:20:09.816795  643361 cri.go:89] found id: "e927ec7e31fb177888402d84efcb61941c46fabf9ca972372e611a63a958c932"
	I0401 11:20:09.816800  643361 cri.go:89] found id: ""
	I0401 11:20:09.816807  643361 logs.go:276] 2 containers: [c0de496168287609f4df97c9320899c715145b6fa2d6529ba6a6501f46b8d0aa e927ec7e31fb177888402d84efcb61941c46fabf9ca972372e611a63a958c932]
	I0401 11:20:09.816864  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:09.834424  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:09.839963  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0401 11:20:09.840049  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 11:20:09.936538  643361 cri.go:89] found id: "5185cdae9a75b34ee950c9f6f1cf6ae65c67838a04ec515a67a25505b4054b46"
	I0401 11:20:09.936559  643361 cri.go:89] found id: "8fb4a48cf27c6725c90da61570872834c03f4dc1865197b3f1426f0f827f894a"
	I0401 11:20:09.936564  643361 cri.go:89] found id: ""
	I0401 11:20:09.936575  643361 logs.go:276] 2 containers: [5185cdae9a75b34ee950c9f6f1cf6ae65c67838a04ec515a67a25505b4054b46 8fb4a48cf27c6725c90da61570872834c03f4dc1865197b3f1426f0f827f894a]
	I0401 11:20:09.936659  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:09.944057  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:09.948554  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0401 11:20:09.948633  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 11:20:10.068613  643361 cri.go:89] found id: "12b9a417beed52c0d14a146ade6251330fcf90c20e8666aa518dcf7ad2f7264e"
	I0401 11:20:10.068633  643361 cri.go:89] found id: "723bd4c79f413d52322777fc3b785237740dffca6ca180349b1c3d3f0909a986"
	I0401 11:20:10.068637  643361 cri.go:89] found id: ""
	I0401 11:20:10.068645  643361 logs.go:276] 2 containers: [12b9a417beed52c0d14a146ade6251330fcf90c20e8666aa518dcf7ad2f7264e 723bd4c79f413d52322777fc3b785237740dffca6ca180349b1c3d3f0909a986]
	I0401 11:20:10.068702  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.075148  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.079249  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0401 11:20:10.079380  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 11:20:10.167628  643361 cri.go:89] found id: "c2b2b6c421e068eab7bf881b707513703035897f1d7591fb54073efa33cf4466"
	I0401 11:20:10.167654  643361 cri.go:89] found id: "f9d360e936a135d336c2d41db86ccc367112509974d2c56fcd3fa3d7fb335b9a"
	I0401 11:20:10.167660  643361 cri.go:89] found id: ""
	I0401 11:20:10.167672  643361 logs.go:276] 2 containers: [c2b2b6c421e068eab7bf881b707513703035897f1d7591fb54073efa33cf4466 f9d360e936a135d336c2d41db86ccc367112509974d2c56fcd3fa3d7fb335b9a]
	I0401 11:20:10.167745  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.178218  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.184468  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 11:20:10.184670  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 11:20:10.273665  643361 cri.go:89] found id: "db524e3e456ecefc70df2862e168f564c1c378c87591f924065c027ff5cba833"
	I0401 11:20:10.273738  643361 cri.go:89] found id: "bc5d280a36c75d595a5c1cd4fe8b305b0c4080ad9491bcbc2be95f95859bde09"
	I0401 11:20:10.273758  643361 cri.go:89] found id: ""
	I0401 11:20:10.273782  643361 logs.go:276] 2 containers: [db524e3e456ecefc70df2862e168f564c1c378c87591f924065c027ff5cba833 bc5d280a36c75d595a5c1cd4fe8b305b0c4080ad9491bcbc2be95f95859bde09]
	I0401 11:20:10.273870  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.278105  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.282581  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0401 11:20:10.282701  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 11:20:10.332020  643361 cri.go:89] found id: "49eb449739e0b955225619deb122badd906ffcfad1c39aa19b6beda5721d93ad"
	I0401 11:20:10.332079  643361 cri.go:89] found id: "000258366df555fd593e48dc6fd6719819779883aa429d40cd8ed39751170282"
	I0401 11:20:10.332105  643361 cri.go:89] found id: ""
	I0401 11:20:10.332125  643361 logs.go:276] 2 containers: [49eb449739e0b955225619deb122badd906ffcfad1c39aa19b6beda5721d93ad 000258366df555fd593e48dc6fd6719819779883aa429d40cd8ed39751170282]
	I0401 11:20:10.332215  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.336554  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.340365  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0401 11:20:10.340501  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0401 11:20:10.392328  643361 cri.go:89] found id: "27b3a26cd6f52effeb1f6ec35ce6166315c1da769749f083642e75e517cc5ec2"
	I0401 11:20:10.392399  643361 cri.go:89] found id: "27fe2520ebb746e49d07fcf1ad334f5c492ba78c197570a874201267d3ffaa3f"
	I0401 11:20:10.392431  643361 cri.go:89] found id: ""
	I0401 11:20:10.392458  643361 logs.go:276] 2 containers: [27b3a26cd6f52effeb1f6ec35ce6166315c1da769749f083642e75e517cc5ec2 27fe2520ebb746e49d07fcf1ad334f5c492ba78c197570a874201267d3ffaa3f]
	I0401 11:20:10.392548  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.396743  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.425218  643361 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 11:20:10.425342  643361 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 11:20:10.563915  643361 cri.go:89] found id: "408a789b3a96ce3c6f7fd9adb8c71566a9f0d114f73314b4be1662ad97d0f023"
	I0401 11:20:10.563987  643361 cri.go:89] found id: ""
	I0401 11:20:10.564023  643361 logs.go:276] 1 containers: [408a789b3a96ce3c6f7fd9adb8c71566a9f0d114f73314b4be1662ad97d0f023]
	I0401 11:20:10.564129  643361 ssh_runner.go:195] Run: which crictl
	I0401 11:20:10.568984  643361 logs.go:123] Gathering logs for kubelet ...
	I0401 11:20:10.569065  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0401 11:20:10.632968  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.253107     661 reflector.go:138] object-"kube-system"/"coredns-token-l8rmx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-l8rmx" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:20:10.633289  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.253340     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:20:10.633582  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.253531     661 reflector.go:138] object-"kube-system"/"kindnet-token-crnqw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-crnqw" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:20:10.633848  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.253724     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-26j8b": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-26j8b" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:20:10.634086  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.253919     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:20:10.634349  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.254131     661 reflector.go:138] object-"kube-system"/"metrics-server-token-p8whl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-p8whl" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:20:10.634604  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.254336     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-w57c5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-w57c5" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:20:10.634837  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:27 old-k8s-version-869040 kubelet[661]: E0401 11:14:27.254517     661 reflector.go:138] object-"default"/"default-token-ldjjx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-ldjjx" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:20:10.648302  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:30 old-k8s-version-869040 kubelet[661]: E0401 11:14:30.412495     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0401 11:20:10.648577  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:30 old-k8s-version-869040 kubelet[661]: E0401 11:14:30.876693     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.651489  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:45 old-k8s-version-869040 kubelet[661]: E0401 11:14:45.796344     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0401 11:20:10.653316  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:46 old-k8s-version-869040 kubelet[661]: E0401 11:14:46.235753     661 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-llv5m": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-llv5m" is forbidden: User "system:node:old-k8s-version-869040" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-869040' and this object
	W0401 11:20:10.656663  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:57 old-k8s-version-869040 kubelet[661]: E0401 11:14:57.947477     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.656917  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:58 old-k8s-version-869040 kubelet[661]: E0401 11:14:58.786263     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.657294  643361 logs.go:138] Found kubelet problem: Apr 01 11:14:58 old-k8s-version-869040 kubelet[661]: E0401 11:14:58.962984     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.658036  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:02 old-k8s-version-869040 kubelet[661]: E0401 11:15:02.187397     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.660978  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:13 old-k8s-version-869040 kubelet[661]: E0401 11:15:13.820810     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0401 11:20:10.663666  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:15 old-k8s-version-869040 kubelet[661]: E0401 11:15:15.015969     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.664050  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:22 old-k8s-version-869040 kubelet[661]: E0401 11:15:22.188017     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.664258  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:24 old-k8s-version-869040 kubelet[661]: E0401 11:15:24.789529     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.664608  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:37 old-k8s-version-869040 kubelet[661]: E0401 11:15:37.787553     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.666743  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:38 old-k8s-version-869040 kubelet[661]: E0401 11:15:38.129930     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.667108  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:42 old-k8s-version-869040 kubelet[661]: E0401 11:15:42.188410     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.667339  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:49 old-k8s-version-869040 kubelet[661]: E0401 11:15:49.785437     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.667696  643361 logs.go:138] Found kubelet problem: Apr 01 11:15:56 old-k8s-version-869040 kubelet[661]: E0401 11:15:56.789835     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.670211  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:01 old-k8s-version-869040 kubelet[661]: E0401 11:16:01.793557     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0401 11:20:10.671060  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:11 old-k8s-version-869040 kubelet[661]: E0401 11:16:11.785144     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.671302  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:14 old-k8s-version-869040 kubelet[661]: E0401 11:16:14.790457     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.673260  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:25 old-k8s-version-869040 kubelet[661]: E0401 11:16:25.268985     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.673481  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:29 old-k8s-version-869040 kubelet[661]: E0401 11:16:29.785550     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.673834  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:32 old-k8s-version-869040 kubelet[661]: E0401 11:16:32.188113     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.674042  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:40 old-k8s-version-869040 kubelet[661]: E0401 11:16:40.785613     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.674407  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:43 old-k8s-version-869040 kubelet[661]: E0401 11:16:43.785076     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.674617  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:53 old-k8s-version-869040 kubelet[661]: E0401 11:16:53.785492     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.675386  643361 logs.go:138] Found kubelet problem: Apr 01 11:16:56 old-k8s-version-869040 kubelet[661]: E0401 11:16:56.785610     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.675602  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:06 old-k8s-version-869040 kubelet[661]: E0401 11:17:06.785564     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.675965  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:09 old-k8s-version-869040 kubelet[661]: E0401 11:17:09.785023     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.676172  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:20 old-k8s-version-869040 kubelet[661]: E0401 11:17:20.785617     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.676588  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:24 old-k8s-version-869040 kubelet[661]: E0401 11:17:24.785143     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.679138  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:34 old-k8s-version-869040 kubelet[661]: E0401 11:17:34.798457     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0401 11:20:10.679510  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:35 old-k8s-version-869040 kubelet[661]: E0401 11:17:35.785081     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.680185  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:47 old-k8s-version-869040 kubelet[661]: E0401 11:17:47.454111     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.680466  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:48 old-k8s-version-869040 kubelet[661]: E0401 11:17:48.786169     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.680827  643361 logs.go:138] Found kubelet problem: Apr 01 11:17:52 old-k8s-version-869040 kubelet[661]: E0401 11:17:52.187364     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.681036  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:02 old-k8s-version-869040 kubelet[661]: E0401 11:18:02.789625     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.681419  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:03 old-k8s-version-869040 kubelet[661]: E0401 11:18:03.785107     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.681777  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:15 old-k8s-version-869040 kubelet[661]: E0401 11:18:15.785146     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.681984  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:16 old-k8s-version-869040 kubelet[661]: E0401 11:18:16.786414     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.682336  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:27 old-k8s-version-869040 kubelet[661]: E0401 11:18:27.785261     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.682542  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:31 old-k8s-version-869040 kubelet[661]: E0401 11:18:31.785780     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.682894  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:40 old-k8s-version-869040 kubelet[661]: E0401 11:18:40.785417     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.683101  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:43 old-k8s-version-869040 kubelet[661]: E0401 11:18:43.785421     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.683452  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:53 old-k8s-version-869040 kubelet[661]: E0401 11:18:53.785086     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.683660  643361 logs.go:138] Found kubelet problem: Apr 01 11:18:54 old-k8s-version-869040 kubelet[661]: E0401 11:18:54.785659     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.684013  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:04 old-k8s-version-869040 kubelet[661]: E0401 11:19:04.785200     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.684228  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:05 old-k8s-version-869040 kubelet[661]: E0401 11:19:05.785514     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.684441  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:16 old-k8s-version-869040 kubelet[661]: E0401 11:19:16.789686     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.684859  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:17 old-k8s-version-869040 kubelet[661]: E0401 11:19:17.785382     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.685076  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:28 old-k8s-version-869040 kubelet[661]: E0401 11:19:28.786172     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.685440  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:31 old-k8s-version-869040 kubelet[661]: E0401 11:19:31.785264     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.685649  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:43 old-k8s-version-869040 kubelet[661]: E0401 11:19:43.785550     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.686001  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:46 old-k8s-version-869040 kubelet[661]: E0401 11:19:46.785335     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:10.686206  643361 logs.go:138] Found kubelet problem: Apr 01 11:19:58 old-k8s-version-869040 kubelet[661]: E0401 11:19:58.787551     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:10.686560  643361 logs.go:138] Found kubelet problem: Apr 01 11:20:01 old-k8s-version-869040 kubelet[661]: E0401 11:20:01.785197     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	I0401 11:20:10.686573  643361 logs.go:123] Gathering logs for kube-apiserver [e0e8f6e0a25eea42eff804ff1ba93241ee0066db840a8f6bbb42e7b3c2a34680] ...
	I0401 11:20:10.686597  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0e8f6e0a25eea42eff804ff1ba93241ee0066db840a8f6bbb42e7b3c2a34680"
	I0401 11:20:10.790307  643361 logs.go:123] Gathering logs for kube-scheduler [723bd4c79f413d52322777fc3b785237740dffca6ca180349b1c3d3f0909a986] ...
	I0401 11:20:10.790360  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 723bd4c79f413d52322777fc3b785237740dffca6ca180349b1c3d3f0909a986"
	I0401 11:20:10.896164  643361 logs.go:123] Gathering logs for kubernetes-dashboard [408a789b3a96ce3c6f7fd9adb8c71566a9f0d114f73314b4be1662ad97d0f023] ...
	I0401 11:20:10.896204  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 408a789b3a96ce3c6f7fd9adb8c71566a9f0d114f73314b4be1662ad97d0f023"
	I0401 11:20:10.981670  643361 logs.go:123] Gathering logs for describe nodes ...
	I0401 11:20:10.981704  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 11:20:11.218439  643361 logs.go:123] Gathering logs for coredns [8fb4a48cf27c6725c90da61570872834c03f4dc1865197b3f1426f0f827f894a] ...
	I0401 11:20:11.218479  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fb4a48cf27c6725c90da61570872834c03f4dc1865197b3f1426f0f827f894a"
	I0401 11:20:11.304902  643361 logs.go:123] Gathering logs for kube-proxy [f9d360e936a135d336c2d41db86ccc367112509974d2c56fcd3fa3d7fb335b9a] ...
	I0401 11:20:11.304932  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9d360e936a135d336c2d41db86ccc367112509974d2c56fcd3fa3d7fb335b9a"
	I0401 11:20:11.387568  643361 logs.go:123] Gathering logs for storage-provisioner [27fe2520ebb746e49d07fcf1ad334f5c492ba78c197570a874201267d3ffaa3f] ...
	I0401 11:20:11.387650  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27fe2520ebb746e49d07fcf1ad334f5c492ba78c197570a874201267d3ffaa3f"
	I0401 11:20:11.437572  643361 logs.go:123] Gathering logs for containerd ...
	I0401 11:20:11.437639  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0401 11:20:11.501863  643361 logs.go:123] Gathering logs for container status ...
	I0401 11:20:11.501940  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 11:20:11.608556  643361 logs.go:123] Gathering logs for etcd [c0de496168287609f4df97c9320899c715145b6fa2d6529ba6a6501f46b8d0aa] ...
	I0401 11:20:11.608587  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0de496168287609f4df97c9320899c715145b6fa2d6529ba6a6501f46b8d0aa"
	I0401 11:20:11.663339  643361 logs.go:123] Gathering logs for coredns [5185cdae9a75b34ee950c9f6f1cf6ae65c67838a04ec515a67a25505b4054b46] ...
	I0401 11:20:11.663377  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5185cdae9a75b34ee950c9f6f1cf6ae65c67838a04ec515a67a25505b4054b46"
	I0401 11:20:11.733865  643361 logs.go:123] Gathering logs for kube-scheduler [12b9a417beed52c0d14a146ade6251330fcf90c20e8666aa518dcf7ad2f7264e] ...
	I0401 11:20:11.733901  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12b9a417beed52c0d14a146ade6251330fcf90c20e8666aa518dcf7ad2f7264e"
	I0401 11:20:11.808863  643361 logs.go:123] Gathering logs for kube-proxy [c2b2b6c421e068eab7bf881b707513703035897f1d7591fb54073efa33cf4466] ...
	I0401 11:20:11.808901  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2b2b6c421e068eab7bf881b707513703035897f1d7591fb54073efa33cf4466"
	I0401 11:20:11.860561  643361 logs.go:123] Gathering logs for storage-provisioner [27b3a26cd6f52effeb1f6ec35ce6166315c1da769749f083642e75e517cc5ec2] ...
	I0401 11:20:11.860597  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27b3a26cd6f52effeb1f6ec35ce6166315c1da769749f083642e75e517cc5ec2"
	I0401 11:20:11.911415  643361 logs.go:123] Gathering logs for dmesg ...
	I0401 11:20:11.911449  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 11:20:11.935394  643361 logs.go:123] Gathering logs for kube-apiserver [cce3647fb821cde76abbc832b46d07b1e6e1ec536df027cdf7121aaf89d84d54] ...
	I0401 11:20:11.935644  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce3647fb821cde76abbc832b46d07b1e6e1ec536df027cdf7121aaf89d84d54"
	I0401 11:20:12.010132  643361 logs.go:123] Gathering logs for etcd [e927ec7e31fb177888402d84efcb61941c46fabf9ca972372e611a63a958c932] ...
	I0401 11:20:12.010227  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e927ec7e31fb177888402d84efcb61941c46fabf9ca972372e611a63a958c932"
	I0401 11:20:12.065906  643361 logs.go:123] Gathering logs for kube-controller-manager [db524e3e456ecefc70df2862e168f564c1c378c87591f924065c027ff5cba833] ...
	I0401 11:20:12.065991  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db524e3e456ecefc70df2862e168f564c1c378c87591f924065c027ff5cba833"
	I0401 11:20:12.159500  643361 logs.go:123] Gathering logs for kube-controller-manager [bc5d280a36c75d595a5c1cd4fe8b305b0c4080ad9491bcbc2be95f95859bde09] ...
	I0401 11:20:12.159590  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc5d280a36c75d595a5c1cd4fe8b305b0c4080ad9491bcbc2be95f95859bde09"
	I0401 11:20:12.243372  643361 logs.go:123] Gathering logs for kindnet [49eb449739e0b955225619deb122badd906ffcfad1c39aa19b6beda5721d93ad] ...
	I0401 11:20:12.243455  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49eb449739e0b955225619deb122badd906ffcfad1c39aa19b6beda5721d93ad"
	I0401 11:20:12.295763  643361 logs.go:123] Gathering logs for kindnet [000258366df555fd593e48dc6fd6719819779883aa429d40cd8ed39751170282] ...
	I0401 11:20:12.296014  643361 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000258366df555fd593e48dc6fd6719819779883aa429d40cd8ed39751170282"
	I0401 11:20:12.349216  643361 out.go:304] Setting ErrFile to fd 2...
	I0401 11:20:12.349287  643361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0401 11:20:12.349362  643361 out.go:239] X Problems detected in kubelet:
	W0401 11:20:12.349406  643361 out.go:239]   Apr 01 11:19:31 old-k8s-version-869040 kubelet[661]: E0401 11:19:31.785264     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:12.349570  643361 out.go:239]   Apr 01 11:19:43 old-k8s-version-869040 kubelet[661]: E0401 11:19:43.785550     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:12.349606  643361 out.go:239]   Apr 01 11:19:46 old-k8s-version-869040 kubelet[661]: E0401 11:19:46.785335     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	W0401 11:20:12.349649  643361 out.go:239]   Apr 01 11:19:58 old-k8s-version-869040 kubelet[661]: E0401 11:19:58.787551     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0401 11:20:12.349688  643361 out.go:239]   Apr 01 11:20:01 old-k8s-version-869040 kubelet[661]: E0401 11:20:01.785197     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	I0401 11:20:12.349723  643361 out.go:304] Setting ErrFile to fd 2...
	I0401 11:20:12.349765  643361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:20:12.867286  653226 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 11:20:13.293384  653226 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 11:20:14.145557  653226 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 11:20:14.146607  653226 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 11:20:14.151530  653226 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 11:20:14.154142  653226 out.go:204]   - Booting up control plane ...
	I0401 11:20:14.154248  653226 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 11:20:14.154750  653226 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 11:20:14.156203  653226 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 11:20:14.168339  653226 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 11:20:14.169558  653226 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 11:20:14.169761  653226 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 11:20:14.273559  653226 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 11:20:22.351102  643361 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0401 11:20:22.363403  643361 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0401 11:20:22.365970  643361 out.go:177] 
	W0401 11:20:22.367560  643361 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0401 11:20:22.367651  643361 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0401 11:20:22.367720  643361 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0401 11:20:22.367758  643361 out.go:239] * 
	W0401 11:20:22.368797  643361 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 11:20:22.371249  643361 out.go:177] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	422ec54d61e42       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   c0d2a4813bc6b       dashboard-metrics-scraper-8d5bb5db8-l8h9p
	408a789b3a96c       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   d0cf3b41de28a       kubernetes-dashboard-cd95d586-fllgf
	a5ea2984802f9       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   3b40aca4dcf7b       busybox
	5185cdae9a75b       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   e64b4bf1fd833       coredns-74ff55c5b-xnz2b
	27b3a26cd6f52       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         1                   c48f7b46b7647       storage-provisioner
	c2b2b6c421e06       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   631dc5e85fe3c       kube-proxy-f74rn
	49eb449739e0b       4740c1948d3fc       5 minutes ago       Running             kindnet-cni                 1                   46a9c8b221ad4       kindnet-qf8vw
	12b9a417beed5       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   efcf5518812f3       kube-scheduler-old-k8s-version-869040
	cce3647fb821c       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   e8d6cd08a29e5       kube-apiserver-old-k8s-version-869040
	db524e3e456ec       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   5b7149265837f       kube-controller-manager-old-k8s-version-869040
	c0de496168287       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   3ba3b1e348cd8       etcd-old-k8s-version-869040
	4f3c83a29aca9       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   73da52a14a075       busybox
	8fb4a48cf27c6       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   db73cc36dd8de       coredns-74ff55c5b-xnz2b
	27fe2520ebb74       ba04bb24b9575       8 minutes ago       Exited              storage-provisioner         0                   e4e48279b3e78       storage-provisioner
	000258366df55       4740c1948d3fc       8 minutes ago       Exited              kindnet-cni                 0                   240ae8ca4e57d       kindnet-qf8vw
	f9d360e936a13       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   4d626ae1dc7e4       kube-proxy-f74rn
	723bd4c79f413       e7605f88f17d6       9 minutes ago       Exited              kube-scheduler              0                   689c33610032b       kube-scheduler-old-k8s-version-869040
	bc5d280a36c75       1df8a2b116bd1       9 minutes ago       Exited              kube-controller-manager     0                   0e1fc44be1177       kube-controller-manager-old-k8s-version-869040
	e0e8f6e0a25ee       2c08bbbc02d3a       9 minutes ago       Exited              kube-apiserver              0                   a58c49aa4b652       kube-apiserver-old-k8s-version-869040
	e927ec7e31fb1       05b738aa1bc63       9 minutes ago       Exited              etcd                        0                   5dedc53504d6c       etcd-old-k8s-version-869040
	
	
	==> containerd <==
	Apr 01 11:16:01 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:16:01.790979689Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Apr 01 11:16:01 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:16:01.793005801Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Apr 01 11:16:24 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:16:24.787208454Z" level=info msg="CreateContainer within sandbox \"c0d2a4813bc6b432b60e4fcca478b5cea67706bf94eb269053247b6e820d2422\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,}"
	Apr 01 11:16:24 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:16:24.806954723Z" level=info msg="CreateContainer within sandbox \"c0d2a4813bc6b432b60e4fcca478b5cea67706bf94eb269053247b6e820d2422\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,} returns container id \"34bce6a30fa0f57f511945e7c05b45f64ecd5e6414b763db3a3185ed39a94ea1\""
	Apr 01 11:16:24 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:16:24.807667811Z" level=info msg="StartContainer for \"34bce6a30fa0f57f511945e7c05b45f64ecd5e6414b763db3a3185ed39a94ea1\""
	Apr 01 11:16:24 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:16:24.885243111Z" level=info msg="StartContainer for \"34bce6a30fa0f57f511945e7c05b45f64ecd5e6414b763db3a3185ed39a94ea1\" returns successfully"
	Apr 01 11:16:24 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:16:24.911760661Z" level=info msg="shim disconnected" id=34bce6a30fa0f57f511945e7c05b45f64ecd5e6414b763db3a3185ed39a94ea1
	Apr 01 11:16:24 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:16:24.911823116Z" level=warning msg="cleaning up after shim disconnected" id=34bce6a30fa0f57f511945e7c05b45f64ecd5e6414b763db3a3185ed39a94ea1 namespace=k8s.io
	Apr 01 11:16:24 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:16:24.911835694Z" level=info msg="cleaning up dead shim"
	Apr 01 11:16:24 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:16:24.920717811Z" level=warning msg="cleanup warnings time=\"2024-04-01T11:16:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2993 runtime=io.containerd.runc.v2\n"
	Apr 01 11:16:25 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:16:25.275781051Z" level=info msg="RemoveContainer for \"fa69ef169c7be011574eb98baaa609070715859db78beba2587bbb7ed3d9b786\""
	Apr 01 11:16:25 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:16:25.285702557Z" level=info msg="RemoveContainer for \"fa69ef169c7be011574eb98baaa609070715859db78beba2587bbb7ed3d9b786\" returns successfully"
	Apr 01 11:17:34 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:17:34.790083896Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 01 11:17:34 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:17:34.795551155Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Apr 01 11:17:34 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:17:34.797599141Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Apr 01 11:17:46 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:17:46.787616112Z" level=info msg="CreateContainer within sandbox \"c0d2a4813bc6b432b60e4fcca478b5cea67706bf94eb269053247b6e820d2422\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,}"
	Apr 01 11:17:46 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:17:46.807520449Z" level=info msg="CreateContainer within sandbox \"c0d2a4813bc6b432b60e4fcca478b5cea67706bf94eb269053247b6e820d2422\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,} returns container id \"422ec54d61e42c6c56a11f489f59cfb5bb074a9121b293a317b5625875fb4ecd\""
	Apr 01 11:17:46 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:17:46.808150225Z" level=info msg="StartContainer for \"422ec54d61e42c6c56a11f489f59cfb5bb074a9121b293a317b5625875fb4ecd\""
	Apr 01 11:17:46 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:17:46.860479903Z" level=info msg="StartContainer for \"422ec54d61e42c6c56a11f489f59cfb5bb074a9121b293a317b5625875fb4ecd\" returns successfully"
	Apr 01 11:17:46 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:17:46.885761722Z" level=info msg="shim disconnected" id=422ec54d61e42c6c56a11f489f59cfb5bb074a9121b293a317b5625875fb4ecd
	Apr 01 11:17:46 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:17:46.885966008Z" level=warning msg="cleaning up after shim disconnected" id=422ec54d61e42c6c56a11f489f59cfb5bb074a9121b293a317b5625875fb4ecd namespace=k8s.io
	Apr 01 11:17:46 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:17:46.885990983Z" level=info msg="cleaning up dead shim"
	Apr 01 11:17:46 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:17:46.894426204Z" level=warning msg="cleanup warnings time=\"2024-04-01T11:17:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3226 runtime=io.containerd.runc.v2\n"
	Apr 01 11:17:47 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:17:47.455092897Z" level=info msg="RemoveContainer for \"34bce6a30fa0f57f511945e7c05b45f64ecd5e6414b763db3a3185ed39a94ea1\""
	Apr 01 11:17:47 old-k8s-version-869040 containerd[568]: time="2024-04-01T11:17:47.460970065Z" level=info msg="RemoveContainer for \"34bce6a30fa0f57f511945e7c05b45f64ecd5e6414b763db3a3185ed39a94ea1\" returns successfully"
	
	
	==> coredns [5185cdae9a75b34ee950c9f6f1cf6ae65c67838a04ec515a67a25505b4054b46] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:37406 - 30725 "HINFO IN 3181792832775074405.1741536105316154471. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024487324s
	
	
	==> coredns [8fb4a48cf27c6725c90da61570872834c03f4dc1865197b3f1426f0f827f894a] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:40320 - 43037 "HINFO IN 5382151021479118010.7703089194899749577. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032414243s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-869040
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-869040
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8aa0d860b7e6047018bc1a9124397cd2c931e0d
	                    minikube.k8s.io/name=old-k8s-version-869040
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T11_11_27_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 11:11:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-869040
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 11:20:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 11:15:27 +0000   Mon, 01 Apr 2024 11:11:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 11:15:27 +0000   Mon, 01 Apr 2024 11:11:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 11:15:27 +0000   Mon, 01 Apr 2024 11:11:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 11:15:27 +0000   Mon, 01 Apr 2024 11:11:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-869040
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022560Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022560Ki
	  pods:               110
	System Info:
	  Machine ID:                 034bba1788f74661aa565ffb4f4f8756
	  System UUID:                6308c6e8-fe6f-43bc-a359-75061646b9bc
	  Boot ID:                    2e0ae28a-b3da-4fcf-af6c-d595b2697792
	  Kernel Version:             5.15.0-1056-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                               ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	  kube-system                 coredns-74ff55c5b-xnz2b                            100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m40s
	  kube-system                 etcd-old-k8s-version-869040                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m48s
	  kube-system                 kindnet-qf8vw                                      100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m40s
	  kube-system                 kube-apiserver-old-k8s-version-869040              250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m48s
	  kube-system                 kube-controller-manager-old-k8s-version-869040     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m48s
	  kube-system                 kube-proxy-f74rn                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 kube-scheduler-old-k8s-version-869040              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m48s
	  kube-system                 metrics-server-9975d5f86-hltl7                     100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m34s
	  kube-system                 storage-provisioner                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-l8h9p          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-fllgf                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 8m49s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m49s                kubelet     Node old-k8s-version-869040 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m49s                kubelet     Node old-k8s-version-869040 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m49s                kubelet     Node old-k8s-version-869040 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m49s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m40s                kubelet     Node old-k8s-version-869040 status is now: NodeReady
	  Normal  Starting                 8m39s                kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m6s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m6s (x8 over 6m6s)  kubelet     Node old-k8s-version-869040 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s (x7 over 6m6s)  kubelet     Node old-k8s-version-869040 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s (x8 over 6m6s)  kubelet     Node old-k8s-version-869040 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m6s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m55s                kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.000823] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.001004] FS-Cache: N-cookie d=00000000fed3411a{9p.inode} n=000000006e7db40a
	[  +0.001198] FS-Cache: N-key=[8] '0e6fed0000000000'
	[  +2.858169] FS-Cache: Duplicate cookie detected
	[  +0.000817] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001069] FS-Cache: O-cookie d=00000000fed3411a{9p.inode} n=000000004b3c27b8
	[  +0.001155] FS-Cache: O-key=[8] '0d6fed0000000000'
	[  +0.000953] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001072] FS-Cache: N-cookie d=00000000fed3411a{9p.inode} n=000000005c51392d
	[  +0.001158] FS-Cache: N-key=[8] '0d6fed0000000000'
	[  +0.369704] FS-Cache: Duplicate cookie detected
	[  +0.000760] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001011] FS-Cache: O-cookie d=00000000fed3411a{9p.inode} n=000000003f3a3b58
	[  +0.001066] FS-Cache: O-key=[8] '136fed0000000000'
	[  +0.000718] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=00000000fed3411a{9p.inode} n=000000001b9631bf
	[  +0.001039] FS-Cache: N-key=[8] '136fed0000000000'
	[  +3.856688] FS-Cache: Duplicate cookie detected
	[  +0.000735] FS-Cache: O-cookie c=00000025 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001072] FS-Cache: O-cookie d=00000000cf35ac13{9P.session} n=00000000ac3dc4e7
	[  +0.001167] FS-Cache: O-key=[10] '34323936393734333236'
	[  +0.000759] FS-Cache: N-cookie c=00000026 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000916] FS-Cache: N-cookie d=00000000cf35ac13{9P.session} n=000000001cb86ccf
	[  +0.001111] FS-Cache: N-key=[10] '34323936393734333236'
	[ +23.384444] systemd-journald[221]: Failed to send WATCHDOG=1 notification message: Connection refused
	
	
	==> etcd [c0de496168287609f4df97c9320899c715145b6fa2d6529ba6a6501f46b8d0aa] <==
	2024-04-01 11:16:21.673489 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:16:31.673448 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:16:41.673424 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:16:51.673453 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:17:01.674173 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:17:11.673294 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:17:21.673331 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:17:31.673550 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:17:41.673404 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:17:51.674055 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:18:01.673323 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:18:11.673289 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:18:21.673295 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:18:31.673429 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:18:41.673353 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:18:51.673234 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:19:01.673346 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:19:11.673411 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:19:21.673324 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:19:31.673402 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:19:41.673303 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:19:51.673499 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:20:01.673310 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:20:11.673412 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:20:21.673283 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [e927ec7e31fb177888402d84efcb61941c46fabf9ca972372e611a63a958c932] <==
	raft2024/04/01 11:11:18 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/04/01 11:11:18 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/04/01 11:11:18 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-04-01 11:11:18.397937 I | etcdserver: published {Name:old-k8s-version-869040 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-04-01 11:11:18.401121 I | embed: ready to serve client requests
	2024-04-01 11:11:18.402574 I | embed: serving client requests on 192.168.85.2:2379
	2024-04-01 11:11:18.402783 I | etcdserver: setting up the initial cluster version to 3.4
	2024-04-01 11:11:18.403097 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-04-01 11:11:18.403259 I | etcdserver/api: enabled capabilities for version 3.4
	2024-04-01 11:11:18.403349 I | embed: ready to serve client requests
	2024-04-01 11:11:18.408072 I | embed: serving client requests on 127.0.0.1:2379
	2024-04-01 11:11:43.511158 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:11:44.559689 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:11:54.555836 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:12:04.555995 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:12:14.555969 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:12:24.555804 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:12:34.555959 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:12:44.555814 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:12:54.555876 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:13:04.555752 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:13:14.555950 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:13:24.555899 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:13:34.555958 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-01 11:13:44.555975 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 11:20:24 up  3:02,  0 users,  load average: 3.35, 2.51, 2.74
	Linux old-k8s-version-869040 5.15.0-1056-aws #61~20.04.1-Ubuntu SMP Wed Mar 13 17:45:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [000258366df555fd593e48dc6fd6719819779883aa429d40cd8ed39751170282] <==
	I0401 11:11:46.179727       1 main.go:227] handling current node
	I0401 11:11:56.197823       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:11:56.197854       1 main.go:227] handling current node
	I0401 11:12:06.215760       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:12:06.215792       1 main.go:227] handling current node
	I0401 11:12:16.231866       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:12:16.231893       1 main.go:227] handling current node
	I0401 11:12:26.248958       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:12:26.248986       1 main.go:227] handling current node
	I0401 11:12:36.265813       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:12:36.265841       1 main.go:227] handling current node
	I0401 11:12:46.280113       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:12:46.280184       1 main.go:227] handling current node
	I0401 11:12:56.305386       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:12:56.305419       1 main.go:227] handling current node
	I0401 11:13:06.322762       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:13:06.322789       1 main.go:227] handling current node
	I0401 11:13:16.336476       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:13:16.336503       1 main.go:227] handling current node
	I0401 11:13:26.345897       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:13:26.345926       1 main.go:227] handling current node
	I0401 11:13:36.361762       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:13:36.361800       1 main.go:227] handling current node
	I0401 11:13:46.381755       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:13:46.382766       1 main.go:227] handling current node
	
	
	==> kindnet [49eb449739e0b955225619deb122badd906ffcfad1c39aa19b6beda5721d93ad] <==
	I0401 11:18:19.909378       1 main.go:227] handling current node
	I0401 11:18:29.916437       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:18:29.916463       1 main.go:227] handling current node
	I0401 11:18:39.930347       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:18:39.930384       1 main.go:227] handling current node
	I0401 11:18:49.934069       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:18:49.934097       1 main.go:227] handling current node
	I0401 11:18:59.941427       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:18:59.941463       1 main.go:227] handling current node
	I0401 11:19:09.947476       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:19:09.947509       1 main.go:227] handling current node
	I0401 11:19:19.958863       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:19:19.958892       1 main.go:227] handling current node
	I0401 11:19:29.970810       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:19:29.971053       1 main.go:227] handling current node
	I0401 11:19:39.983941       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:19:39.984120       1 main.go:227] handling current node
	I0401 11:19:49.993708       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:19:49.993872       1 main.go:227] handling current node
	I0401 11:20:00.489581       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:20:00.489619       1 main.go:227] handling current node
	I0401 11:20:10.515947       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:20:10.515973       1 main.go:227] handling current node
	I0401 11:20:20.661339       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0401 11:20:20.661374       1 main.go:227] handling current node
	
	
	==> kube-apiserver [cce3647fb821cde76abbc832b46d07b1e6e1ec536df027cdf7121aaf89d84d54] <==
	I0401 11:16:58.216013       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 11:16:58.216022       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0401 11:17:29.755874       1 handler_proxy.go:102] no RequestInfo found in the context
	E0401 11:17:29.756063       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 11:17:29.756080       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 11:17:40.679195       1 client.go:360] parsed scheme: "passthrough"
	I0401 11:17:40.679238       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 11:17:40.679248       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 11:18:22.473353       1 client.go:360] parsed scheme: "passthrough"
	I0401 11:18:22.473396       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 11:18:22.473405       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 11:18:56.513270       1 client.go:360] parsed scheme: "passthrough"
	I0401 11:18:56.513315       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 11:18:56.513324       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0401 11:19:28.350700       1 handler_proxy.go:102] no RequestInfo found in the context
	E0401 11:19:28.350781       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 11:19:28.350800       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 11:19:29.019626       1 client.go:360] parsed scheme: "passthrough"
	I0401 11:19:29.019674       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 11:19:29.019683       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 11:20:03.504000       1 client.go:360] parsed scheme: "passthrough"
	I0401 11:20:03.504044       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 11:20:03.504053       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [e0e8f6e0a25eea42eff804ff1ba93241ee0066db840a8f6bbb42e7b3c2a34680] <==
	I0401 11:11:24.989496       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0401 11:11:24.989585       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0401 11:11:25.025589       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0401 11:11:25.031408       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0401 11:11:25.031489       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0401 11:11:25.547997       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 11:11:25.589107       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0401 11:11:25.669277       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0401 11:11:25.670467       1 controller.go:606] quota admission added evaluator for: endpoints
	I0401 11:11:25.674564       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 11:11:26.676424       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0401 11:11:27.139065       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0401 11:11:27.239206       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0401 11:11:35.614565       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 11:11:44.033496       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0401 11:11:44.155484       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0401 11:11:49.547050       1 client.go:360] parsed scheme: "passthrough"
	I0401 11:11:49.547142       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 11:11:49.547152       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 11:12:31.728282       1 client.go:360] parsed scheme: "passthrough"
	I0401 11:12:31.728326       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 11:12:31.728335       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 11:13:16.589089       1 client.go:360] parsed scheme: "passthrough"
	I0401 11:13:16.589131       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 11:13:16.589148       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [bc5d280a36c75d595a5c1cd4fe8b305b0c4080ad9491bcbc2be95f95859bde09] <==
	I0401 11:11:44.014622       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0401 11:11:44.014727       1 shared_informer.go:247] Caches are synced for PV protection 
	I0401 11:11:44.025181       1 shared_informer.go:247] Caches are synced for expand 
	E0401 11:11:44.053521       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	I0401 11:11:44.097170       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0401 11:11:44.104599       1 shared_informer.go:247] Caches are synced for stateful set 
	I0401 11:11:44.113121       1 shared_informer.go:247] Caches are synced for resource quota 
	I0401 11:11:44.105550       1 range_allocator.go:373] Set node old-k8s-version-869040 PodCIDR to [10.244.0.0/24]
	E0401 11:11:44.109673       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0401 11:11:44.142918       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0401 11:11:44.159057       1 shared_informer.go:247] Caches are synced for resource quota 
	I0401 11:11:44.166184       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-5v297"
	I0401 11:11:44.178872       1 shared_informer.go:247] Caches are synced for attach detach 
	I0401 11:11:44.204280       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-f74rn"
	I0401 11:11:44.209746       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-xnz2b"
	I0401 11:11:44.233872       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-qf8vw"
	I0401 11:11:44.488484       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	E0401 11:11:44.503899       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"d5e0fcd7-84a3-4607-8fd5-8e4a297d9832", ResourceVersion:"409", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63847566687, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240202-8f1494ea\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001366cc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001366ce0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001366d00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001366d20)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001366d40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generatio
n:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001366d60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:
(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001366d80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlo
ckStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CS
I:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001366da0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Q
uobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240202-8f1494ea", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001366dc0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001366e00)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i
:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", Sub
Path:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40016a2120), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x400154c718), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000458690), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinit
y:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000f648)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x400154c760)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v
1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0401 11:11:44.688687       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0401 11:11:44.701246       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0401 11:11:44.701273       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0401 11:11:45.476499       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0401 11:11:45.517110       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-5v297"
	I0401 11:11:48.970163       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0401 11:13:49.686091       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	
	
	==> kube-controller-manager [db524e3e456ecefc70df2862e168f564c1c378c87591f924065c027ff5cba833] <==
	E0401 11:16:17.853415       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 11:16:23.457126       1 request.go:655] Throttling request took 1.048480994s, request: GET:https://192.168.85.2:8443/apis/coordination.k8s.io/v1beta1?timeout=32s
	W0401 11:16:24.308495       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 11:16:48.355388       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 11:16:55.958987       1 request.go:655] Throttling request took 1.048495472s, request: GET:https://192.168.85.2:8443/apis/authentication.k8s.io/v1?timeout=32s
	W0401 11:16:56.810587       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 11:17:18.858519       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 11:17:28.461089       1 request.go:655] Throttling request took 1.047125172s, request: GET:https://192.168.85.2:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
	W0401 11:17:29.312469       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 11:17:49.360272       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 11:18:00.962892       1 request.go:655] Throttling request took 1.048472223s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0401 11:18:01.814387       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 11:18:19.862093       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 11:18:33.464856       1 request.go:655] Throttling request took 1.048506338s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0401 11:18:34.316290       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 11:18:50.364300       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 11:19:05.966632       1 request.go:655] Throttling request took 1.048312684s, request: GET:https://192.168.85.2:8443/apis/networking.k8s.io/v1?timeout=32s
	W0401 11:19:06.818159       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 11:19:20.866307       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 11:19:38.468487       1 request.go:655] Throttling request took 1.048418179s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0401 11:19:39.319897       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 11:19:51.368066       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 11:20:10.970926       1 request.go:655] Throttling request took 1.048224956s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0401 11:20:11.823640       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 11:20:21.869911       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [c2b2b6c421e068eab7bf881b707513703035897f1d7591fb54073efa33cf4466] <==
	I0401 11:14:29.564677       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0401 11:14:29.564960       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0401 11:14:29.582256       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0401 11:14:29.582555       1 server_others.go:185] Using iptables Proxier.
	I0401 11:14:29.583057       1 server.go:650] Version: v1.20.0
	I0401 11:14:29.583999       1 config.go:315] Starting service config controller
	I0401 11:14:29.592419       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0401 11:14:29.585395       1 config.go:224] Starting endpoint slice config controller
	I0401 11:14:29.592713       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0401 11:14:29.693475       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0401 11:14:29.693542       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [f9d360e936a135d336c2d41db86ccc367112509974d2c56fcd3fa3d7fb335b9a] <==
	I0401 11:11:45.311915       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0401 11:11:45.312092       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0401 11:11:45.405962       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0401 11:11:45.406046       1 server_others.go:185] Using iptables Proxier.
	I0401 11:11:45.406266       1 server.go:650] Version: v1.20.0
	I0401 11:11:45.406785       1 config.go:315] Starting service config controller
	I0401 11:11:45.406799       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0401 11:11:45.414210       1 config.go:224] Starting endpoint slice config controller
	I0401 11:11:45.414230       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0401 11:11:45.506912       1 shared_informer.go:247] Caches are synced for service config 
	I0401 11:11:45.514334       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [12b9a417beed52c0d14a146ade6251330fcf90c20e8666aa518dcf7ad2f7264e] <==
	I0401 11:14:21.367411       1 serving.go:331] Generated self-signed cert in-memory
	W0401 11:14:27.081645       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 11:14:27.081752       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 11:14:27.081786       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 11:14:27.081824       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 11:14:27.380414       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0401 11:14:27.383911       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 11:14:27.384079       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 11:14:27.386240       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0401 11:14:27.587363       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [723bd4c79f413d52322777fc3b785237740dffca6ca180349b1c3d3f0909a986] <==
	W0401 11:11:24.190640       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 11:11:24.190645       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 11:11:24.259944       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0401 11:11:24.260843       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 11:11:24.260869       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 11:11:24.260886       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0401 11:11:24.265043       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 11:11:24.265150       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 11:11:24.265310       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0401 11:11:24.273784       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 11:11:24.278026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 11:11:24.278225       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 11:11:24.278396       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 11:11:24.278475       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 11:11:24.278528       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 11:11:24.278582       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 11:11:24.278629       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 11:11:24.278753       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 11:11:25.073277       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 11:11:25.169258       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 11:11:25.198740       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 11:11:25.202357       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 11:11:25.379002       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 11:11:25.398482       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0401 11:11:27.860999       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Apr 01 11:18:53 old-k8s-version-869040 kubelet[661]: I0401 11:18:53.784720     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 422ec54d61e42c6c56a11f489f59cfb5bb074a9121b293a317b5625875fb4ecd
	Apr 01 11:18:53 old-k8s-version-869040 kubelet[661]: E0401 11:18:53.785086     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	Apr 01 11:18:54 old-k8s-version-869040 kubelet[661]: E0401 11:18:54.785659     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 01 11:19:04 old-k8s-version-869040 kubelet[661]: I0401 11:19:04.784805     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 422ec54d61e42c6c56a11f489f59cfb5bb074a9121b293a317b5625875fb4ecd
	Apr 01 11:19:04 old-k8s-version-869040 kubelet[661]: E0401 11:19:04.785200     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	Apr 01 11:19:05 old-k8s-version-869040 kubelet[661]: E0401 11:19:05.785514     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 01 11:19:16 old-k8s-version-869040 kubelet[661]: E0401 11:19:16.789686     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 01 11:19:17 old-k8s-version-869040 kubelet[661]: I0401 11:19:17.784800     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 422ec54d61e42c6c56a11f489f59cfb5bb074a9121b293a317b5625875fb4ecd
	Apr 01 11:19:17 old-k8s-version-869040 kubelet[661]: E0401 11:19:17.785382     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	Apr 01 11:19:28 old-k8s-version-869040 kubelet[661]: E0401 11:19:28.786172     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 01 11:19:31 old-k8s-version-869040 kubelet[661]: I0401 11:19:31.784846     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 422ec54d61e42c6c56a11f489f59cfb5bb074a9121b293a317b5625875fb4ecd
	Apr 01 11:19:31 old-k8s-version-869040 kubelet[661]: E0401 11:19:31.785264     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	Apr 01 11:19:43 old-k8s-version-869040 kubelet[661]: E0401 11:19:43.785550     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 01 11:19:46 old-k8s-version-869040 kubelet[661]: I0401 11:19:46.784824     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 422ec54d61e42c6c56a11f489f59cfb5bb074a9121b293a317b5625875fb4ecd
	Apr 01 11:19:46 old-k8s-version-869040 kubelet[661]: E0401 11:19:46.785335     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	Apr 01 11:19:58 old-k8s-version-869040 kubelet[661]: E0401 11:19:58.787551     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 01 11:20:01 old-k8s-version-869040 kubelet[661]: I0401 11:20:01.784784     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 422ec54d61e42c6c56a11f489f59cfb5bb074a9121b293a317b5625875fb4ecd
	Apr 01 11:20:01 old-k8s-version-869040 kubelet[661]: E0401 11:20:01.785197     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	Apr 01 11:20:13 old-k8s-version-869040 kubelet[661]: E0401 11:20:13.785486     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 01 11:20:14 old-k8s-version-869040 kubelet[661]: I0401 11:20:14.784721     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 422ec54d61e42c6c56a11f489f59cfb5bb074a9121b293a317b5625875fb4ecd
	Apr 01 11:20:14 old-k8s-version-869040 kubelet[661]: E0401 11:20:14.785319     661 pod_workers.go:191] Error syncing pod 6792f529-4f15-4ab8-a687-b2736e5958ce ("dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-l8h9p_kubernetes-dashboard(6792f529-4f15-4ab8-a687-b2736e5958ce)"
	Apr 01 11:20:24 old-k8s-version-869040 kubelet[661]: E0401 11:20:24.813085     661 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Apr 01 11:20:24 old-k8s-version-869040 kubelet[661]: E0401 11:20:24.813720     661 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Apr 01 11:20:24 old-k8s-version-869040 kubelet[661]: E0401 11:20:24.819289     661 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-p8whl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-hltl7_kube-system(2f2f558
e-0722-4dec-880b-e383db23f3cb): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Apr 01 11:20:24 old-k8s-version-869040 kubelet[661]: E0401 11:20:24.819479     661 pod_workers.go:191] Error syncing pod 2f2f558e-0722-4dec-880b-e383db23f3cb ("metrics-server-9975d5f86-hltl7_kube-system(2f2f558e-0722-4dec-880b-e383db23f3cb)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	
	
	==> kubernetes-dashboard [408a789b3a96ce3c6f7fd9adb8c71566a9f0d114f73314b4be1662ad97d0f023] <==
	2024/04/01 11:14:51 Using namespace: kubernetes-dashboard
	2024/04/01 11:14:51 Using in-cluster config to connect to apiserver
	2024/04/01 11:14:51 Using secret token for csrf signing
	2024/04/01 11:14:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/04/01 11:14:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/04/01 11:14:51 Successful initial request to the apiserver, version: v1.20.0
	2024/04/01 11:14:51 Generating JWE encryption key
	2024/04/01 11:14:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/04/01 11:14:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/04/01 11:14:52 Initializing JWE encryption key from synchronized object
	2024/04/01 11:14:52 Creating in-cluster Sidecar client
	2024/04/01 11:14:52 Serving insecurely on HTTP port: 9090
	2024/04/01 11:14:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/01 11:15:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/01 11:15:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/01 11:16:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/01 11:16:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/01 11:17:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/01 11:17:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/01 11:18:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/01 11:18:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/01 11:19:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/01 11:19:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/01 11:20:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/01 11:14:51 Starting overwatch
	
	
	==> storage-provisioner [27b3a26cd6f52effeb1f6ec35ce6166315c1da769749f083642e75e517cc5ec2] <==
	I0401 11:14:29.813679       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0401 11:14:29.826058       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0401 11:14:29.826266       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0401 11:14:47.279627       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0401 11:14:47.280351       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1d581b17-d9f7-4ef4-bb01-89ee280cf0ad", APIVersion:"v1", ResourceVersion:"775", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-869040_8cb75d9d-4593-4eee-9fae-1ddf702fc42c became leader
	I0401 11:14:47.280454       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-869040_8cb75d9d-4593-4eee-9fae-1ddf702fc42c!
	I0401 11:14:47.380619       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-869040_8cb75d9d-4593-4eee-9fae-1ddf702fc42c!
	
	
	==> storage-provisioner [27fe2520ebb746e49d07fcf1ad334f5c492ba78c197570a874201267d3ffaa3f] <==
	I0401 11:11:46.212741       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0401 11:11:46.229208       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0401 11:11:46.229287       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0401 11:11:46.243499       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0401 11:11:46.244036       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-869040_123ae0a7-40a8-47ab-99fa-269bb279ceb9!
	I0401 11:11:46.243978       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1d581b17-d9f7-4ef4-bb01-89ee280cf0ad", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-869040_123ae0a7-40a8-47ab-99fa-269bb279ceb9 became leader
	I0401 11:11:46.344872       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-869040_123ae0a7-40a8-47ab-99fa-269bb279ceb9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-869040 -n old-k8s-version-869040
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-869040 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-hltl7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-869040 describe pod metrics-server-9975d5f86-hltl7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-869040 describe pod metrics-server-9975d5f86-hltl7: exit status 1 (126.155377ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-hltl7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-869040 describe pod metrics-server-9975d5f86-hltl7: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (383.55s)

                                                
                                    

Test pass (297/335)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 7.09
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.29.3/json-events 6.38
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.09
18 TestDownloadOnly/v1.29.3/DeleteAll 0.23
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.30.0-beta.0/json-events 6.56
22 TestDownloadOnly/v1.30.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.30.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.30.0-beta.0/DeleteAll 0.2
28 TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.6
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.15
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.16
36 TestAddons/Setup 116.77
38 TestAddons/parallel/Registry 14.56
40 TestAddons/parallel/InspektorGadget 11.13
41 TestAddons/parallel/MetricsServer 5.81
44 TestAddons/parallel/CSI 66.89
45 TestAddons/parallel/Headlamp 12.12
46 TestAddons/parallel/CloudSpanner 5.58
47 TestAddons/parallel/LocalPath 52.93
48 TestAddons/parallel/NvidiaDevicePlugin 5.57
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.18
53 TestAddons/StoppedEnableDisable 12.23
54 TestCertOptions 33.81
55 TestCertExpiration 228
57 TestForceSystemdFlag 39.75
58 TestForceSystemdEnv 46.62
59 TestDockerEnvContainerd 49.51
64 TestErrorSpam/setup 30.05
65 TestErrorSpam/start 0.78
66 TestErrorSpam/status 1.01
67 TestErrorSpam/pause 1.71
68 TestErrorSpam/unpause 1.79
69 TestErrorSpam/stop 1.46
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 55.23
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 6.1
76 TestFunctional/serial/KubeContext 0.07
77 TestFunctional/serial/KubectlGetPods 0.1
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.9
81 TestFunctional/serial/CacheCmd/cache/add_local 1.56
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.15
86 TestFunctional/serial/CacheCmd/cache/delete 0.22
87 TestFunctional/serial/MinikubeKubectlCmd 0.17
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
89 TestFunctional/serial/ExtraConfig 42.71
90 TestFunctional/serial/ComponentHealth 0.1
91 TestFunctional/serial/LogsCmd 1.71
92 TestFunctional/serial/LogsFileCmd 1.71
93 TestFunctional/serial/InvalidService 4.87
95 TestFunctional/parallel/ConfigCmd 0.58
96 TestFunctional/parallel/DashboardCmd 8.3
97 TestFunctional/parallel/DryRun 0.51
98 TestFunctional/parallel/InternationalLanguage 0.21
99 TestFunctional/parallel/StatusCmd 1.34
103 TestFunctional/parallel/ServiceCmdConnect 9.73
104 TestFunctional/parallel/AddonsCmd 0.3
105 TestFunctional/parallel/PersistentVolumeClaim 27.4
107 TestFunctional/parallel/SSHCmd 0.7
108 TestFunctional/parallel/CpCmd 2.36
110 TestFunctional/parallel/FileSync 0.3
111 TestFunctional/parallel/CertSync 2.01
115 TestFunctional/parallel/NodeLabels 0.13
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
119 TestFunctional/parallel/License 0.32
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.45
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 7.3
132 TestFunctional/parallel/ServiceCmd/List 0.52
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
135 TestFunctional/parallel/ServiceCmd/Format 0.46
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.57
137 TestFunctional/parallel/ServiceCmd/URL 0.48
138 TestFunctional/parallel/MountCmd/any-port 7.87
139 TestFunctional/parallel/ProfileCmd/profile_list 0.64
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
141 TestFunctional/parallel/MountCmd/specific-port 2.61
142 TestFunctional/parallel/MountCmd/VerifyCleanup 2.06
143 TestFunctional/parallel/Version/short 0.09
144 TestFunctional/parallel/Version/components 1.31
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.63
150 TestFunctional/parallel/ImageCommands/Setup 2.34
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.24
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.64
161 TestFunctional/delete_addon-resizer_images 0.08
162 TestFunctional/delete_my-image_image 0.01
163 TestFunctional/delete_minikube_cached_images 0.01
167 TestMultiControlPlane/serial/StartCluster 125.32
168 TestMultiControlPlane/serial/DeployApp 30.14
169 TestMultiControlPlane/serial/PingHostFromPods 1.73
170 TestMultiControlPlane/serial/AddWorkerNode 20.55
171 TestMultiControlPlane/serial/NodeLabels 0.11
172 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
173 TestMultiControlPlane/serial/CopyFile 20.41
174 TestMultiControlPlane/serial/StopSecondaryNode 12.98
175 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.59
176 TestMultiControlPlane/serial/RestartSecondaryNode 18.04
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.79
178 TestMultiControlPlane/serial/RestartClusterKeepsNodes 158.86
179 TestMultiControlPlane/serial/DeleteSecondaryNode 11.48
180 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
181 TestMultiControlPlane/serial/StopCluster 35.97
182 TestMultiControlPlane/serial/RestartCluster 79.04
183 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.58
184 TestMultiControlPlane/serial/AddSecondaryNode 44.01
185 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.79
189 TestJSONOutput/start/Command 56.35
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.77
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.67
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.77
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.24
214 TestKicCustomNetwork/create_custom_network 44.36
215 TestKicCustomNetwork/use_default_bridge_network 37.55
216 TestKicExistingNetwork 33.51
217 TestKicCustomSubnet 36.92
218 TestKicStaticIP 32.97
219 TestMainNoArgs 0.06
220 TestMinikubeProfile 68.5
223 TestMountStart/serial/StartWithMountFirst 6.71
224 TestMountStart/serial/VerifyMountFirst 0.27
225 TestMountStart/serial/StartWithMountSecond 6.25
226 TestMountStart/serial/VerifyMountSecond 0.26
227 TestMountStart/serial/DeleteFirst 1.62
228 TestMountStart/serial/VerifyMountPostDelete 0.27
229 TestMountStart/serial/Stop 1.2
230 TestMountStart/serial/RestartStopped 7.28
231 TestMountStart/serial/VerifyMountPostStop 0.64
234 TestMultiNode/serial/FreshStart2Nodes 79.97
235 TestMultiNode/serial/DeployApp2Nodes 4.53
236 TestMultiNode/serial/PingHostFrom2Pods 1.07
237 TestMultiNode/serial/AddNode 17.14
238 TestMultiNode/serial/MultiNodeLabels 0.14
239 TestMultiNode/serial/ProfileList 0.37
240 TestMultiNode/serial/CopyFile 10.22
241 TestMultiNode/serial/StopNode 2.27
242 TestMultiNode/serial/StartAfterStop 9.2
243 TestMultiNode/serial/RestartKeepsNodes 83.62
244 TestMultiNode/serial/DeleteNode 5.8
245 TestMultiNode/serial/StopMultiNode 24.2
246 TestMultiNode/serial/RestartMultiNode 54.33
247 TestMultiNode/serial/ValidateNameConflict 33.31
252 TestPreload 109.23
254 TestScheduledStopUnix 106.75
257 TestInsufficientStorage 10.16
258 TestRunningBinaryUpgrade 86.66
260 TestKubernetesUpgrade 373.79
261 TestMissingContainerUpgrade 171
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
264 TestNoKubernetes/serial/StartWithK8s 40.63
265 TestNoKubernetes/serial/StartWithStopK8s 17.37
266 TestNoKubernetes/serial/Start 8.73
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
268 TestNoKubernetes/serial/ProfileList 1.1
269 TestNoKubernetes/serial/Stop 1.29
270 TestNoKubernetes/serial/StartNoArgs 7.54
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
272 TestStoppedBinaryUpgrade/Setup 1.19
273 TestStoppedBinaryUpgrade/Upgrade 100.81
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.09
283 TestPause/serial/Start 90.45
284 TestPause/serial/SecondStartNoReconfiguration 7.15
285 TestPause/serial/Pause 1.19
286 TestPause/serial/VerifyStatus 0.41
287 TestPause/serial/Unpause 0.95
288 TestPause/serial/PauseAgain 1.14
289 TestPause/serial/DeletePaused 2.96
290 TestPause/serial/VerifyDeletedResources 0.43
298 TestNetworkPlugins/group/false 5.42
303 TestStartStop/group/old-k8s-version/serial/FirstStart 175.56
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.14
306 TestStartStop/group/old-k8s-version/serial/DeployApp 9.65
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.4
308 TestStartStop/group/old-k8s-version/serial/Stop 12.28
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
311 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.51
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.56
313 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.21
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 266.82
316 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
318 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
319 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.21
321 TestStartStop/group/embed-certs/serial/FirstStart 87.26
322 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.15
324 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.38
325 TestStartStop/group/old-k8s-version/serial/Pause 3.37
327 TestStartStop/group/no-preload/serial/FirstStart 70.12
328 TestStartStop/group/embed-certs/serial/DeployApp 7.39
329 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
330 TestStartStop/group/embed-certs/serial/Stop 12.29
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
332 TestStartStop/group/embed-certs/serial/SecondStart 277.7
333 TestStartStop/group/no-preload/serial/DeployApp 8.48
334 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.12
335 TestStartStop/group/no-preload/serial/Stop 12.56
336 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
337 TestStartStop/group/no-preload/serial/SecondStart 281.01
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
341 TestStartStop/group/embed-certs/serial/Pause 3.2
343 TestStartStop/group/newest-cni/serial/FirstStart 49.87
344 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
345 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.15
346 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.31
347 TestStartStop/group/no-preload/serial/Pause 3.95
348 TestNetworkPlugins/group/auto/Start 93.64
349 TestStartStop/group/newest-cni/serial/DeployApp 0
350 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.33
351 TestStartStop/group/newest-cni/serial/Stop 1.37
352 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.36
353 TestStartStop/group/newest-cni/serial/SecondStart 22.8
354 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
357 TestStartStop/group/newest-cni/serial/Pause 3.72
358 TestNetworkPlugins/group/kindnet/Start 86.89
359 TestNetworkPlugins/group/auto/KubeletFlags 0.33
360 TestNetworkPlugins/group/auto/NetCatPod 9.32
361 TestNetworkPlugins/group/auto/DNS 0.27
362 TestNetworkPlugins/group/auto/Localhost 0.44
363 TestNetworkPlugins/group/auto/HairPin 0.17
364 TestNetworkPlugins/group/calico/Start 77.08
365 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
366 TestNetworkPlugins/group/kindnet/KubeletFlags 0.43
367 TestNetworkPlugins/group/kindnet/NetCatPod 10.35
368 TestNetworkPlugins/group/kindnet/DNS 0.26
369 TestNetworkPlugins/group/kindnet/Localhost 0.21
370 TestNetworkPlugins/group/kindnet/HairPin 0.22
371 TestNetworkPlugins/group/custom-flannel/Start 66.09
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/calico/KubeletFlags 0.47
374 TestNetworkPlugins/group/calico/NetCatPod 9.4
375 TestNetworkPlugins/group/calico/DNS 0.23
376 TestNetworkPlugins/group/calico/Localhost 0.23
377 TestNetworkPlugins/group/calico/HairPin 0.24
378 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
379 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.45
380 TestNetworkPlugins/group/enable-default-cni/Start 96.81
381 TestNetworkPlugins/group/custom-flannel/DNS 0.21
382 TestNetworkPlugins/group/custom-flannel/Localhost 0.29
383 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
384 TestNetworkPlugins/group/flannel/Start 63.67
385 TestNetworkPlugins/group/flannel/ControllerPod 6.01
386 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
387 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.26
388 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
389 TestNetworkPlugins/group/flannel/NetCatPod 10.31
390 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
391 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
392 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
393 TestNetworkPlugins/group/flannel/DNS 0.3
394 TestNetworkPlugins/group/flannel/Localhost 0.26
395 TestNetworkPlugins/group/flannel/HairPin 0.25
396 TestNetworkPlugins/group/bridge/Start 88.65
397 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
398 TestNetworkPlugins/group/bridge/NetCatPod 9.25
399 TestNetworkPlugins/group/bridge/DNS 0.18
400 TestNetworkPlugins/group/bridge/Localhost 0.16
401 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (7.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-006513 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-006513 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.08942765s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.09s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-006513
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-006513: exit status 85 (92.822727ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-006513 | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC |          |
	|         | -p download-only-006513        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|         | --driver=docker                |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 10:27:05
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 10:27:05.890255  445759 out.go:291] Setting OutFile to fd 1 ...
	I0401 10:27:05.890398  445759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:27:05.890410  445759 out.go:304] Setting ErrFile to fd 2...
	I0401 10:27:05.890415  445759 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:27:05.890724  445759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18551-440344/.minikube/bin
	W0401 10:27:05.890853  445759 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18551-440344/.minikube/config/config.json: open /home/jenkins/minikube-integration/18551-440344/.minikube/config/config.json: no such file or directory
	I0401 10:27:05.891280  445759 out.go:298] Setting JSON to true
	I0401 10:27:05.892200  445759 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7776,"bootTime":1711959450,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0401 10:27:05.892277  445759 start.go:139] virtualization:  
	I0401 10:27:05.895444  445759 out.go:97] [download-only-006513] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0401 10:27:05.897914  445759 out.go:169] MINIKUBE_LOCATION=18551
	W0401 10:27:05.895663  445759 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18551-440344/.minikube/cache/preloaded-tarball: no such file or directory
	I0401 10:27:05.895715  445759 notify.go:220] Checking for updates...
	I0401 10:27:05.899740  445759 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 10:27:05.901733  445759 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18551-440344/kubeconfig
	I0401 10:27:05.903675  445759 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18551-440344/.minikube
	I0401 10:27:05.905605  445759 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0401 10:27:05.909347  445759 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0401 10:27:05.909675  445759 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 10:27:05.929014  445759 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0401 10:27:05.929156  445759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 10:27:05.997349  445759 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-01 10:27:05.988013411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0401 10:27:05.997457  445759 docker.go:295] overlay module found
	I0401 10:27:05.999475  445759 out.go:97] Using the docker driver based on user configuration
	I0401 10:27:05.999509  445759 start.go:297] selected driver: docker
	I0401 10:27:05.999516  445759 start.go:901] validating driver "docker" against <nil>
	I0401 10:27:05.999663  445759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 10:27:06.065758  445759 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-01 10:27:06.056038842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0401 10:27:06.065938  445759 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 10:27:06.066236  445759 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0401 10:27:06.066397  445759 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0401 10:27:06.068925  445759 out.go:169] Using Docker driver with root privileges
	I0401 10:27:06.070968  445759 cni.go:84] Creating CNI manager for ""
	I0401 10:27:06.070996  445759 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0401 10:27:06.071008  445759 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 10:27:06.071116  445759 start.go:340] cluster config:
	{Name:download-only-006513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-006513 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:27:06.073233  445759 out.go:97] Starting "download-only-006513" primary control-plane node in "download-only-006513" cluster
	I0401 10:27:06.073276  445759 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0401 10:27:06.075026  445759 out.go:97] Pulling base image v0.0.43-1711559786-18485 ...
	I0401 10:27:06.075059  445759 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0401 10:27:06.075262  445759 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon
	I0401 10:27:06.088669  445759 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 to local cache
	I0401 10:27:06.088843  445759 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory
	I0401 10:27:06.088945  445759 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 to local cache
	I0401 10:27:06.238316  445759 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0401 10:27:06.238341  445759 cache.go:56] Caching tarball of preloaded images
	I0401 10:27:06.238578  445759 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0401 10:27:06.240888  445759 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0401 10:27:06.240917  445759 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0401 10:27:06.366812  445759 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/18551-440344/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0401 10:27:10.150118  445759 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 as a tarball
	
	
	* The control-plane node download-only-006513 host does not exist
	  To start a cluster, run: "minikube start -p download-only-006513"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
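The download-only logs above show minikube shelling out to docker system info --format "{{json .}}" (cli_runner.go:164) and decoding the JSON into an info struct (info.go:266). Below is a minimal Go sketch of that pattern, not minikube's own code: it assumes the docker CLI is installed and on PATH, and it decodes only a handful of the fields visible in the dump above.

// Minimal sketch (not minikube's implementation) of querying the Docker
// daemon the way the log above does: run `docker system info --format
// "{{json .}}"` and decode a few fields from the JSON it prints.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type dockerInfo struct {
	ServerVersion   string `json:"ServerVersion"`
	OperatingSystem string `json:"OperatingSystem"`
	OSType          string `json:"OSType"`
	Architecture    string `json:"Architecture"`
	NCPU            int    `json:"NCPU"`
}

func main() {
	// Shell out to the docker CLI exactly as the log shows.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("docker %s on %s/%s (%s), %d CPUs\n",
		info.ServerVersion, info.OSType, info.Architecture, info.OperatingSystem, info.NCPU)
}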

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-006513
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.29.3/json-events (6.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-919109 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-919109 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.383143916s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (6.38s)

                                                
                                    
TestDownloadOnly/v1.29.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-919109
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-919109: exit status 85 (85.762311ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-006513 | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC |                     |
	|         | -p download-only-006513        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC | 01 Apr 24 10:27 UTC |
	| delete  | -p download-only-006513        | download-only-006513 | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC | 01 Apr 24 10:27 UTC |
	| start   | -o=json --download-only        | download-only-919109 | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC |                     |
	|         | -p download-only-919109        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|         | --container-runtime=containerd |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 10:27:13
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 10:27:13.409315  445927 out.go:291] Setting OutFile to fd 1 ...
	I0401 10:27:13.409538  445927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:27:13.409565  445927 out.go:304] Setting ErrFile to fd 2...
	I0401 10:27:13.409585  445927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:27:13.409864  445927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18551-440344/.minikube/bin
	I0401 10:27:13.410294  445927 out.go:298] Setting JSON to true
	I0401 10:27:13.411200  445927 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7784,"bootTime":1711959450,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0401 10:27:13.411295  445927 start.go:139] virtualization:  
	I0401 10:27:13.413890  445927 out.go:97] [download-only-919109] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0401 10:27:13.416072  445927 out.go:169] MINIKUBE_LOCATION=18551
	I0401 10:27:13.414192  445927 notify.go:220] Checking for updates...
	I0401 10:27:13.420888  445927 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 10:27:13.422913  445927 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18551-440344/kubeconfig
	I0401 10:27:13.424760  445927 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18551-440344/.minikube
	I0401 10:27:13.426883  445927 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0401 10:27:13.430485  445927 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0401 10:27:13.430748  445927 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 10:27:13.450664  445927 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0401 10:27:13.450776  445927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 10:27:13.516747  445927 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2024-04-01 10:27:13.507090346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0401 10:27:13.516871  445927 docker.go:295] overlay module found
	I0401 10:27:13.519355  445927 out.go:97] Using the docker driver based on user configuration
	I0401 10:27:13.519404  445927 start.go:297] selected driver: docker
	I0401 10:27:13.519426  445927 start.go:901] validating driver "docker" against <nil>
	I0401 10:27:13.519539  445927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 10:27:13.574940  445927 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2024-04-01 10:27:13.565700665 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0401 10:27:13.575114  445927 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 10:27:13.575442  445927 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0401 10:27:13.575638  445927 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0401 10:27:13.577678  445927 out.go:169] Using Docker driver with root privileges
	I0401 10:27:13.579455  445927 cni.go:84] Creating CNI manager for ""
	I0401 10:27:13.579476  445927 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0401 10:27:13.579487  445927 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 10:27:13.579576  445927 start.go:340] cluster config:
	{Name:download-only-919109 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-919109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:27:13.581542  445927 out.go:97] Starting "download-only-919109" primary control-plane node in "download-only-919109" cluster
	I0401 10:27:13.581567  445927 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0401 10:27:13.583479  445927 out.go:97] Pulling base image v0.0.43-1711559786-18485 ...
	I0401 10:27:13.583506  445927 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0401 10:27:13.583686  445927 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon
	I0401 10:27:13.596546  445927 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 to local cache
	I0401 10:27:13.596675  445927 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory
	I0401 10:27:13.596698  445927 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory, skipping pull
	I0401 10:27:13.596704  445927 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 exists in cache, skipping pull
	I0401 10:27:13.596712  445927 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 as a tarball
	I0401 10:27:13.653775  445927 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	I0401 10:27:13.653800  445927 cache.go:56] Caching tarball of preloaded images
	I0401 10:27:13.653969  445927 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime containerd
	I0401 10:27:13.656312  445927 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0401 10:27:13.656349  445927 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4 ...
	I0401 10:27:13.763852  445927 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4?checksum=md5:663a9a795decbfebeb48b89f3f24d179 -> /home/jenkins/minikube-integration/18551-440344/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4
	I0401 10:27:18.125926  445927 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4 ...
	I0401 10:27:18.126085  445927 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18551-440344/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-919109 host does not exist
	  To start a cluster, run: "minikube start -p download-only-919109"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.09s)
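The preload steps logged above (download.go:107, preload.go:237/248/255) download a tarball whose expected MD5 is carried in the URL's checksum query parameter and then verify the saved file. The following Go sketch illustrates that download-then-verify pattern under those assumptions; it is not the minikube implementation, and the destination path is invented for the example.

// Minimal sketch (not minikube's actual code) of the pattern the log shows:
// download a preload tarball whose expected MD5 travels in the URL's
// ?checksum=md5:... query parameter, then verify the bytes written to disk.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"os"
	"strings"
)

func downloadAndVerify(rawURL, dest string) error {
	u, err := url.Parse(rawURL)
	if err != nil {
		return err
	}
	// Expected digest, e.g. "md5:663a9a795decbfebeb48b89f3f24d179".
	want := strings.TrimPrefix(u.Query().Get("checksum"), "md5:")

	resp, err := http.Get(rawURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Write to disk and hash in a single pass.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if want != "" && got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// URL taken from the log above; the destination path is illustrative only.
	err := downloadAndVerify(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-containerd-overlay2-arm64.tar.lz4?checksum=md5:663a9a795decbfebeb48b89f3f24d179",
		"/tmp/preloaded-images.tar.lz4",
	)
	fmt.Println("verify result:", err)
}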

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-919109
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/json-events (6.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-315775 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-315775 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.557141869s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/json-events (6.56s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-315775
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-315775: exit status 85 (93.065167ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-006513 | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC |                     |
	|         | -p download-only-006513             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|         | --driver=docker                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC | 01 Apr 24 10:27 UTC |
	| delete  | -p download-only-006513             | download-only-006513 | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC | 01 Apr 24 10:27 UTC |
	| start   | -o=json --download-only             | download-only-919109 | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC |                     |
	|         | -p download-only-919109             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3        |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|         | --driver=docker                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC | 01 Apr 24 10:27 UTC |
	| delete  | -p download-only-919109             | download-only-919109 | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC | 01 Apr 24 10:27 UTC |
	| start   | -o=json --download-only             | download-only-315775 | jenkins | v1.33.0-beta.0 | 01 Apr 24 10:27 UTC |                     |
	|         | -p download-only-315775             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0 |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|         | --driver=docker                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd      |                      |         |                |                     |                     |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 10:27:20
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 10:27:20.246204  446093 out.go:291] Setting OutFile to fd 1 ...
	I0401 10:27:20.246313  446093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:27:20.246321  446093 out.go:304] Setting ErrFile to fd 2...
	I0401 10:27:20.246327  446093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:27:20.246570  446093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18551-440344/.minikube/bin
	I0401 10:27:20.246948  446093 out.go:298] Setting JSON to true
	I0401 10:27:20.247788  446093 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7791,"bootTime":1711959450,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0401 10:27:20.247856  446093 start.go:139] virtualization:  
	I0401 10:27:20.250245  446093 out.go:97] [download-only-315775] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0401 10:27:20.252347  446093 out.go:169] MINIKUBE_LOCATION=18551
	I0401 10:27:20.250463  446093 notify.go:220] Checking for updates...
	I0401 10:27:20.256574  446093 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 10:27:20.258818  446093 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18551-440344/kubeconfig
	I0401 10:27:20.260990  446093 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18551-440344/.minikube
	I0401 10:27:20.262765  446093 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0401 10:27:20.266599  446093 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0401 10:27:20.266868  446093 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 10:27:20.286915  446093 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0401 10:27:20.287030  446093 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 10:27:20.351227  446093 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-01 10:27:20.340680676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0401 10:27:20.351345  446093 docker.go:295] overlay module found
	I0401 10:27:20.353689  446093 out.go:97] Using the docker driver based on user configuration
	I0401 10:27:20.353731  446093 start.go:297] selected driver: docker
	I0401 10:27:20.353738  446093 start.go:901] validating driver "docker" against <nil>
	I0401 10:27:20.353855  446093 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 10:27:20.411118  446093 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-01 10:27:20.402454099 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0401 10:27:20.411286  446093 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 10:27:20.411582  446093 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0401 10:27:20.411787  446093 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0401 10:27:20.414037  446093 out.go:169] Using Docker driver with root privileges
	I0401 10:27:20.416074  446093 cni.go:84] Creating CNI manager for ""
	I0401 10:27:20.416094  446093 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0401 10:27:20.416115  446093 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 10:27:20.416217  446093 start.go:340] cluster config:
	{Name:download-only-315775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:download-only-315775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval
:1m0s}
	I0401 10:27:20.418485  446093 out.go:97] Starting "download-only-315775" primary control-plane node in "download-only-315775" cluster
	I0401 10:27:20.418510  446093 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0401 10:27:20.420417  446093 out.go:97] Pulling base image v0.0.43-1711559786-18485 ...
	I0401 10:27:20.420457  446093 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime containerd
	I0401 10:27:20.420567  446093 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local docker daemon
	I0401 10:27:20.434669  446093 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 to local cache
	I0401 10:27:20.434818  446093 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory
	I0401 10:27:20.434837  446093 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 in local cache directory, skipping pull
	I0401 10:27:20.434842  446093 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 exists in cache, skipping pull
	I0401 10:27:20.434851  446093 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 as a tarball
	I0401 10:27:20.480029  446093 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I0401 10:27:20.480068  446093 cache.go:56] Caching tarball of preloaded images
	I0401 10:27:20.480235  446093 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime containerd
	I0401 10:27:20.482715  446093 out.go:97] Downloading Kubernetes v1.30.0-beta.0 preload ...
	I0401 10:27:20.482739  446093 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-arm64.tar.lz4 ...
	I0401 10:27:20.594256  446093 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:f676343275e1172ac594af64d6d0592a -> /home/jenkins/minikube-integration/18551-440344/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I0401 10:27:25.129437  446093 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-arm64.tar.lz4 ...
	I0401 10:27:25.129543  446093 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18551-440344/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-315775 host does not exist
	  To start a cluster, run: "minikube start -p download-only-315775"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-315775
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-553170 --alsologtostderr --binary-mirror http://127.0.0.1:34313 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-553170" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-553170
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.15s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-126557
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-126557: exit status 85 (152.394714ms)

                                                
                                                
-- stdout --
	* Profile "addons-126557" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-126557"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.15s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.16s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-126557
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-126557: exit status 85 (162.416187ms)

                                                
                                                
-- stdout --
	* Profile "addons-126557" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-126557"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.16s)

                                                
                                    
TestAddons/Setup (116.77s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-126557 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-126557 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (1m56.76906s)
--- PASS: TestAddons/Setup (116.77s)

                                                
                                    
TestAddons/parallel/Registry (14.56s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 43.635435ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-ml2g2" [bbaae877-1e11-468f-888a-9776609aa128] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00484963s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4wwdd" [c2545582-2c57-437a-b39b-294cb4c20eaf] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004778482s
addons_test.go:340: (dbg) Run:  kubectl --context addons-126557 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-126557 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-126557 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.359730773s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-126557 ip
2024/04/01 10:29:39 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-126557 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.56s)
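The registry check above runs wget --spider -S against http://registry.kube-system.svc.cluster.local from a throwaway busybox pod. A rough Go equivalent of that probe is sketched below; the service hostname only resolves from inside the cluster, so the snippet is purely illustrative and is not part of the test suite.

// Rough stand-in for `wget --spider -S`: send an HTTP HEAD request and print
// the response status and headers. Only works where the cluster-internal
// service DNS name resolves (i.e. from inside a pod).
package main

import (
	"fmt"
	"net/http"
)

func main() {
	resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
	for k, v := range resp.Header {
		fmt.Println(k+":", v)
	}
}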

                                                
                                    
TestAddons/parallel/InspektorGadget (11.13s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-x7fr8" [8596e2f9-6b84-460b-850e-2503fb1c7d07] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004769159s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-126557
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-126557: (6.121270361s)
--- PASS: TestAddons/parallel/InspektorGadget (11.13s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.81s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 7.376865ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-75d6c48ddd-plv9h" [15419844-7761-41fd-90ac-f12c5fcd0fcd] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004905464s
addons_test.go:415: (dbg) Run:  kubectl --context addons-126557 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-126557 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.81s)
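
With the metrics-server pod Ready, the same resource query the test issues can be run directly (a sketch; `top nodes` is an extra check not performed by the test):

    kubectl --context addons-126557 top pods -n kube-system
    kubectl --context addons-126557 top nodes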

                                                
                                    
TestAddons/parallel/CSI (66.89s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 44.071165ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-126557 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-126557 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2fca5db7-4e71-45fe-b786-dfb9f0d7fa80] Pending
helpers_test.go:344: "task-pv-pod" [2fca5db7-4e71-45fe-b786-dfb9f0d7fa80] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2fca5db7-4e71-45fe-b786-dfb9f0d7fa80] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.00352823s
addons_test.go:584: (dbg) Run:  kubectl --context addons-126557 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-126557 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-126557 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-126557 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-126557 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-126557 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-126557 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [42262138-3138-42a9-8fee-aec684498f57] Pending
helpers_test.go:344: "task-pv-pod-restore" [42262138-3138-42a9-8fee-aec684498f57] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [42262138-3138-42a9-8fee-aec684498f57] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004968933s
addons_test.go:626: (dbg) Run:  kubectl --context addons-126557 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-126557 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-126557 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-126557 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-126557 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.768117189s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-126557 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (66.89s)
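
The sequence above walks the csi-hostpath driver through provision, snapshot and restore. A condensed sketch of the same flow using the manifests named in the log (paths are relative to the minikube test tree; the polling steps are omitted):

    kubectl --context addons-126557 create -f testdata/csi-hostpath-driver/pvc.yaml          # claim "hpvc"
    kubectl --context addons-126557 create -f testdata/csi-hostpath-driver/pv-pod.yaml       # pod "task-pv-pod" binds the claim
    kubectl --context addons-126557 create -f testdata/csi-hostpath-driver/snapshot.yaml     # VolumeSnapshot "new-snapshot-demo"
    kubectl --context addons-126557 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
    kubectl --context addons-126557 delete pod task-pv-pod
    kubectl --context addons-126557 delete pvc hpvc
    kubectl --context addons-126557 create -f testdata/csi-hostpath-driver/pvc-restore.yaml  # new claim sourced from the snapshot
    kubectl --context addons-126557 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml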

                                                
                                    
TestAddons/parallel/Headlamp (12.12s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-126557 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-126557 --alsologtostderr -v=1: (1.116536744s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-nl4s4" [8deff347-38e1-4f70-b69d-cf4ded92a113] Pending
helpers_test.go:344: "headlamp-5b77dbd7c4-nl4s4" [8deff347-38e1-4f70-b69d-cf4ded92a113] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-nl4s4" [8deff347-38e1-4f70-b69d-cf4ded92a113] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003659765s
--- PASS: TestAddons/parallel/Headlamp (12.12s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-479zg" [f28f6363-7f9d-499d-976c-127785e7941a] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004423209s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-126557
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                    
TestAddons/parallel/LocalPath (52.93s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-126557 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-126557 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-126557 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d3b21bd1-9591-4f15-921c-246bc30f5a55] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d3b21bd1-9591-4f15-921c-246bc30f5a55] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d3b21bd1-9591-4f15-921c-246bc30f5a55] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.020449834s
addons_test.go:891: (dbg) Run:  kubectl --context addons-126557 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-126557 ssh "cat /opt/local-path-provisioner/pvc-e946bd4c-0d39-436e-a133-57feb23c806a_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-126557 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-126557 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-126557 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-126557 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.443482238s)
--- PASS: TestAddons/parallel/LocalPath (52.93s)
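
The local-path check writes a file through a PVC provisioned by the storage-provisioner-rancher addon and reads it back from the node. A sketch of the manual equivalent (the pvc-..._default_test-pvc directory name is generated per claim, so the exact path differs between runs):

    kubectl --context addons-126557 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-126557 apply -f testdata/storage-provisioner-rancher/pod.yaml
    kubectl --context addons-126557 get pvc test-pvc -o=json     # note the bound volume name
    # once the test-local-path pod has completed, read the file it wrote from the node's hostpath
    out/minikube-linux-arm64 -p addons-126557 ssh "ls /opt/local-path-provisioner/"
    out/minikube-linux-arm64 -p addons-126557 ssh "cat /opt/local-path-provisioner/<volume-dir>/file1"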

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.57s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-42fcm" [ac4fd004-08f3-4874-9487-b879518c709f] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004135655s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-126557
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.57s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-f7x77" [587cfb39-6c81-46bc-90ec-91a59246f233] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004143491s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-126557 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-126557 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.23s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-126557
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-126557: (11.923544338s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-126557
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-126557
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-126557
--- PASS: TestAddons/StoppedEnableDisable (12.23s)

                                                
                                    
TestCertOptions (33.81s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-677057 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-677057 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (31.143757674s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-677057 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-677057 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-677057 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-677057" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-677057
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-677057: (1.984161604s)
--- PASS: TestCertOptions (33.81s)
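
The same certificate options can be verified on any profile started with custom API-server SANs and port; a sketch reusing this run's flags (the openssl call inspects the generated serving certificate inside the node):

    out/minikube-linux-arm64 start -p cert-options-677057 --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=docker --container-runtime=containerd
    # the extra IPs and names should show up as Subject Alternative Names, and 8555 in the admin kubeconfig
    out/minikube-linux-arm64 -p cert-options-677057 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
    out/minikube-linux-arm64 ssh -p cert-options-677057 -- "sudo cat /etc/kubernetes/admin.conf"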

                                                
                                    
TestCertExpiration (228s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-152372 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-152372 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (38.306952119s)
E0401 11:10:38.656149  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-152372 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-152372 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.359710537s)
helpers_test.go:175: Cleaning up "cert-expiration-152372" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-152372
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-152372: (2.329673813s)
--- PASS: TestCertExpiration (228.00s)
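
The expiration flow amounts to starting a profile with a short --cert-expiration, letting it lapse, and re-running start with a longer value so the certificates are regenerated; a sketch with this run's values:

    out/minikube-linux-arm64 start -p cert-expiration-152372 --memory=2048 \
      --cert-expiration=3m --driver=docker --container-runtime=containerd
    # ...wait for the 3-minute certificates to expire, then renew them with a longer lifetime
    out/minikube-linux-arm64 start -p cert-expiration-152372 --memory=2048 \
      --cert-expiration=8760h --driver=docker --container-runtime=containerd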

                                                
                                    
TestForceSystemdFlag (39.75s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-180830 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-180830 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.121441309s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-180830 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-180830" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-180830
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-180830: (2.319369377s)
--- PASS: TestForceSystemdFlag (39.75s)
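
With --force-systemd the node's containerd should be configured for the systemd cgroup driver; a sketch of the manual check (the grep for SystemdCgroup is an added assumption about what to look for, the test itself only cats the file):

    out/minikube-linux-arm64 start -p force-systemd-flag-180830 --memory=2048 \
      --force-systemd --driver=docker --container-runtime=containerd
    # SystemdCgroup = true in containerd's config indicates the systemd cgroup driver is in effect
    out/minikube-linux-arm64 -p force-systemd-flag-180830 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup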

                                                
                                    
TestForceSystemdEnv (46.62s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-739457 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0401 11:09:25.742006  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-739457 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (43.99079662s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-739457 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-739457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-739457
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-739457: (2.228649509s)
--- PASS: TestForceSystemdEnv (46.62s)

                                                
                                    
TestDockerEnvContainerd (49.51s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-398799 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-398799 --driver=docker  --container-runtime=containerd: (33.074165556s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-398799"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-398799": (1.340331796s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-60z3yp8XM0Xq/agent.462852" SSH_AGENT_PID="462863" DOCKER_HOST=ssh://docker@127.0.0.1:33172 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-60z3yp8XM0Xq/agent.462852" SSH_AGENT_PID="462863" DOCKER_HOST=ssh://docker@127.0.0.1:33172 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-60z3yp8XM0Xq/agent.462852" SSH_AGENT_PID="462863" DOCKER_HOST=ssh://docker@127.0.0.1:33172 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.639153992s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-60z3yp8XM0Xq/agent.462852" SSH_AGENT_PID="462863" DOCKER_HOST=ssh://docker@127.0.0.1:33172 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-398799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-398799
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-398799: (2.121032521s)
--- PASS: TestDockerEnvContainerd (49.51s)
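
The docker-env flow for a containerd cluster tunnels the Docker CLI over SSH rather than exposing a dockerd socket; a sketch of the same session, applying the environment with eval instead of the inline variables used by the test:

    out/minikube-linux-arm64 start -p dockerenv-398799 --driver=docker --container-runtime=containerd
    # exports SSH_AUTH_SOCK, SSH_AGENT_PID and DOCKER_HOST=ssh://docker@127.0.0.1:<port>
    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-398799)"
    docker version
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls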

                                                
                                    
TestErrorSpam/setup (30.05s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-193593 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-193593 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-193593 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-193593 --driver=docker  --container-runtime=containerd: (30.047386579s)
--- PASS: TestErrorSpam/setup (30.05s)

                                                
                                    
TestErrorSpam/start (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-193593 --log_dir /tmp/nospam-193593 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-193593 --log_dir /tmp/nospam-193593 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-193593 --log_dir /tmp/nospam-193593 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

                                                
                                    
TestErrorSpam/status (1.01s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-193593 --log_dir /tmp/nospam-193593 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-193593 --log_dir /tmp/nospam-193593 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-193593 --log_dir /tmp/nospam-193593 status
--- PASS: TestErrorSpam/status (1.01s)

                                                
                                    
TestErrorSpam/pause (1.71s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-193593 --log_dir /tmp/nospam-193593 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-193593 --log_dir /tmp/nospam-193593 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-193593 --log_dir /tmp/nospam-193593 pause
--- PASS: TestErrorSpam/pause (1.71s)

                                                
                                    
TestErrorSpam/unpause (1.79s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-193593 --log_dir /tmp/nospam-193593 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-193593 --log_dir /tmp/nospam-193593 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-193593 --log_dir /tmp/nospam-193593 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

                                                
                                    
TestErrorSpam/stop (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-193593 --log_dir /tmp/nospam-193593 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-193593 --log_dir /tmp/nospam-193593 stop: (1.234210609s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-193593 --log_dir /tmp/nospam-193593 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-193593 --log_dir /tmp/nospam-193593 stop
--- PASS: TestErrorSpam/stop (1.46s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18551-440344/.minikube/files/etc/test/nested/copy/445754/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (55.23s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-805196 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0401 10:34:25.745795  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
E0401 10:34:25.752013  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
E0401 10:34:25.762291  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
E0401 10:34:25.782631  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
E0401 10:34:25.823100  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
E0401 10:34:25.903413  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
E0401 10:34:26.063882  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
E0401 10:34:26.384478  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
E0401 10:34:27.025477  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
E0401 10:34:28.305711  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-805196 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (55.232626472s)
--- PASS: TestFunctional/serial/StartWithProxy (55.23s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.1s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-805196 --alsologtostderr -v=8
E0401 10:34:30.866831  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
E0401 10:34:35.987028  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-805196 --alsologtostderr -v=8: (6.098751191s)
functional_test.go:659: soft start took 6.103286031s for "functional-805196" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.10s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-805196 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.9s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-805196 cache add registry.k8s.io/pause:3.1: (1.418648317s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-805196 cache add registry.k8s.io/pause:3.3: (1.256800618s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-805196 cache add registry.k8s.io/pause:latest: (1.21956636s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.90s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.56s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-805196 /tmp/TestFunctionalserialCacheCmdcacheadd_local89087509/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 cache add minikube-local-cache-test:functional-805196
functional_test.go:1085: (dbg) Done: out/minikube-linux-arm64 -p functional-805196 cache add minikube-local-cache-test:functional-805196: (1.00015308s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 cache delete minikube-local-cache-test:functional-805196
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-805196
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.56s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-805196 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (293.275269ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-805196 cache reload: (1.154585237s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.22s)
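
Taken together, the CacheCmd subtests above map onto a small day-to-day workflow: pre-seed an image into the cluster, remove it from the node's runtime, then push the cached copy back. A sketch using the same image and profile:

    out/minikube-linux-arm64 -p functional-805196 cache add registry.k8s.io/pause:latest
    out/minikube-linux-arm64 cache list
    # drop the image from the node's containerd store, then restore it from minikube's local cache
    out/minikube-linux-arm64 -p functional-805196 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-805196 cache reload
    out/minikube-linux-arm64 -p functional-805196 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest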

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 kubectl -- --context functional-805196 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.17s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-805196 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (42.71s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-805196 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0401 10:34:46.227383  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
E0401 10:35:06.707868  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-805196 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.710619954s)
functional_test.go:757: restart took 42.710740221s for "functional-805196" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.71s)
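
Extra component flags are passed through --extra-config as <component>.<flag>=<value> pairs and are re-applied when the same profile is restarted; a sketch with the admission-plugin setting used here (the kubectl check is an added verification step, not part of the test):

    out/minikube-linux-arm64 start -p functional-805196 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    # the running kube-apiserver static pod should now carry the extra flag
    kubectl --context functional-805196 get pod -n kube-system -l component=kube-apiserver -o yaml | grep enable-admission-plugins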

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-805196 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-805196 logs: (1.714659416s)
--- PASS: TestFunctional/serial/LogsCmd (1.71s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 logs --file /tmp/TestFunctionalserialLogsFileCmd2202990016/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-805196 logs --file /tmp/TestFunctionalserialLogsFileCmd2202990016/001/logs.txt: (1.710510696s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.71s)

                                                
                                    
TestFunctional/serial/InvalidService (4.87s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-805196 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-805196
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-805196: exit status 115 (446.925359ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30956 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-805196 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-805196 delete -f testdata/invalidsvc.yaml: (1.173047477s)
--- PASS: TestFunctional/serial/InvalidService (4.87s)
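
The SVC_UNREACHABLE exit above is minikube declining to open a tunnel to a Service with no running backing pods; a quick way to confirm why before retrying (the endpoints check is a suggested diagnostic, not part of the test):

    kubectl --context functional-805196 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-arm64 service invalid-svc -p functional-805196   # exits 115: no running pod for the service
    kubectl --context functional-805196 get endpoints invalid-svc       # an empty ENDPOINTS column explains the failure
    kubectl --context functional-805196 delete -f testdata/invalidsvc.yaml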

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-805196 config get cpus: exit status 14 (109.788049ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-805196 config get cpus: exit status 14 (97.768123ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.58s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-805196 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-805196 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 477194: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.30s)

                                                
                                    
TestFunctional/parallel/DryRun (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-805196 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-805196 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (215.985953ms)

                                                
                                                
-- stdout --
	* [functional-805196] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18551
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18551-440344/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18551-440344/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 10:36:09.720404  476853 out.go:291] Setting OutFile to fd 1 ...
	I0401 10:36:09.720612  476853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:36:09.720642  476853 out.go:304] Setting ErrFile to fd 2...
	I0401 10:36:09.720667  476853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:36:09.720912  476853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18551-440344/.minikube/bin
	I0401 10:36:09.721313  476853 out.go:298] Setting JSON to false
	I0401 10:36:09.722286  476853 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8320,"bootTime":1711959450,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0401 10:36:09.722383  476853 start.go:139] virtualization:  
	I0401 10:36:09.725403  476853 out.go:177] * [functional-805196] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0401 10:36:09.728171  476853 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 10:36:09.730171  476853 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 10:36:09.728239  476853 notify.go:220] Checking for updates...
	I0401 10:36:09.734188  476853 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18551-440344/kubeconfig
	I0401 10:36:09.736054  476853 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18551-440344/.minikube
	I0401 10:36:09.738340  476853 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0401 10:36:09.740192  476853 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 10:36:09.742481  476853 config.go:182] Loaded profile config "functional-805196": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0401 10:36:09.742963  476853 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 10:36:09.761586  476853 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0401 10:36:09.761713  476853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 10:36:09.848167  476853 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-04-01 10:36:09.832805972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0401 10:36:09.848274  476853 docker.go:295] overlay module found
	I0401 10:36:09.851164  476853 out.go:177] * Using the docker driver based on existing profile
	I0401 10:36:09.853191  476853 start.go:297] selected driver: docker
	I0401 10:36:09.853207  476853 start.go:901] validating driver "docker" against &{Name:functional-805196 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-805196 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:36:09.853315  476853 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 10:36:09.855484  476853 out.go:177] 
	W0401 10:36:09.857584  476853 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0401 10:36:09.859614  476853 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-805196 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.51s)
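For reference, the dry-run check above can be reproduced by hand against the existing profile; this is a minimal sketch using the profile name from this run, not part of the captured output. The low-memory invocation is expected to be rejected with RSRC_INSUFFICIENT_REQ_MEMORY, while the plain invocation only validates the configuration without creating anything:

    $ out/minikube-linux-arm64 start -p functional-805196 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd
    $ out/minikube-linux-arm64 start -p functional-805196 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd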

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-805196 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-805196 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (212.48799ms)

-- stdout --
	* [functional-805196] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18551
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18551-440344/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18551-440344/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0401 10:36:09.523040  476813 out.go:291] Setting OutFile to fd 1 ...
	I0401 10:36:09.523200  476813 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:36:09.523211  476813 out.go:304] Setting ErrFile to fd 2...
	I0401 10:36:09.523217  476813 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:36:09.524434  476813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18551-440344/.minikube/bin
	I0401 10:36:09.524826  476813 out.go:298] Setting JSON to false
	I0401 10:36:09.525883  476813 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8320,"bootTime":1711959450,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0401 10:36:09.525956  476813 start.go:139] virtualization:  
	I0401 10:36:09.528478  476813 out.go:177] * [functional-805196] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (arm64)
	I0401 10:36:09.530835  476813 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 10:36:09.532480  476813 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 10:36:09.530905  476813 notify.go:220] Checking for updates...
	I0401 10:36:09.536023  476813 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18551-440344/kubeconfig
	I0401 10:36:09.538181  476813 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18551-440344/.minikube
	I0401 10:36:09.539903  476813 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0401 10:36:09.541454  476813 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 10:36:09.544223  476813 config.go:182] Loaded profile config "functional-805196": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0401 10:36:09.544739  476813 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 10:36:09.567077  476813 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0401 10:36:09.567207  476813 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 10:36:09.633030  476813 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-04-01 10:36:09.623560487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0401 10:36:09.633173  476813 docker.go:295] overlay module found
	I0401 10:36:09.635381  476813 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0401 10:36:09.637261  476813 start.go:297] selected driver: docker
	I0401 10:36:09.637279  476813 start.go:901] validating driver "docker" against &{Name:functional-805196 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-805196 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 10:36:09.637391  476813 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 10:36:09.640003  476813 out.go:177] 
	W0401 10:36:09.642094  476813 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0401 10:36:09.644052  476813 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
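The French output above is the point of this test: « Utilisation du pilote docker basé sur le profil existant » is the localized "Using the docker driver based on existing profile", and the X line translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250 MiB is less than the usable minimum of 1800 MB". A rough way to reproduce the localized dry run by hand, assuming minikube picks the language from the standard locale variables (LC_ALL/LANG):

    $ LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-805196 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd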

TestFunctional/parallel/StatusCmd (1.34s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.34s)
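The -f argument above is the literal Go template passed by the test (including the misspelled "kublet" key, which is only a label in the output). As an illustration, the same template mechanism works for ad-hoc status queries, e.g.:

    $ out/minikube-linux-arm64 -p functional-805196 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'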

TestFunctional/parallel/ServiceCmdConnect (9.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-805196 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
E0401 10:35:47.668600  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
functional_test.go:1631: (dbg) Run:  kubectl --context functional-805196 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-fwpx5" [87c692bc-c4c1-4c12-8fc5-2841df9b2093] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-fwpx5" [87c692bc-c4c1-4c12-8fc5-2841df9b2093] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.005023927s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31147
functional_test.go:1671: http://192.168.49.2:31147: success! body:

Hostname: hello-node-connect-7799dfb7c6-fwpx5

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31147
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.73s)
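Condensed, the steps above are the usual NodePort round trip; a sketch with the names reused from this run (the final fetch is what the test performs through its Go HTTP client, shown here as curl for illustration):

    $ kubectl --context functional-805196 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
    $ kubectl --context functional-805196 expose deployment hello-node-connect --type=NodePort --port=8080
    $ out/minikube-linux-arm64 -p functional-805196 service hello-node-connect --url
    $ curl http://192.168.49.2:31147/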

TestFunctional/parallel/AddonsCmd (0.3s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.30s)

TestFunctional/parallel/PersistentVolumeClaim (27.4s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d79324d4-4cb6-4ba0-881b-a1fc3ad44ff9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.038667809s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-805196 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-805196 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-805196 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-805196 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dc7f63aa-2197-43f8-8ace-68897c6c9426] Pending
helpers_test.go:344: "sp-pod" [dc7f63aa-2197-43f8-8ace-68897c6c9426] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dc7f63aa-2197-43f8-8ace-68897c6c9426] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003398999s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-805196 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-805196 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-805196 delete -f testdata/storage-provisioner/pod.yaml: (1.255775247s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-805196 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [92453100-b94f-4a3c-b34f-4046a655cd4b] Pending
helpers_test.go:344: "sp-pod" [92453100-b94f-4a3c-b34f-4046a655cd4b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [92453100-b94f-4a3c-b34f-4046a655cd4b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004264634s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-805196 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.40s)
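The manifests referenced above live under the repo's testdata/storage-provisioner directory; the persistence check itself is write, recreate, read. Compressed to the essential commands from this run:

    $ kubectl --context functional-805196 apply -f testdata/storage-provisioner/pvc.yaml
    $ kubectl --context functional-805196 apply -f testdata/storage-provisioner/pod.yaml
    $ kubectl --context functional-805196 exec sp-pod -- touch /tmp/mount/foo
    $ kubectl --context functional-805196 delete -f testdata/storage-provisioner/pod.yaml
    $ kubectl --context functional-805196 apply -f testdata/storage-provisioner/pod.yaml
    $ kubectl --context functional-805196 exec sp-pod -- ls /tmp/mount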

TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

TestFunctional/parallel/CpCmd (2.36s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh -n functional-805196 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 cp functional-805196:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2776287035/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh -n functional-805196 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh -n functional-805196 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.36s)

TestFunctional/parallel/FileSync (0.3s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/445754/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "sudo cat /etc/test/nested/copy/445754/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)
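The path checked above comes from the test's file-sync setup; assuming the standard minikube convention that files placed under $MINIKUBE_HOME/files/ are mirrored into the node's filesystem at the same relative path, the verification is just:

    $ out/minikube-linux-arm64 -p functional-805196 ssh "sudo cat /etc/test/nested/copy/445754/hosts"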

TestFunctional/parallel/CertSync (2.01s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/445754.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "sudo cat /etc/ssl/certs/445754.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/445754.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "sudo cat /usr/share/ca-certificates/445754.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/4457542.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "sudo cat /etc/ssl/certs/4457542.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/4457542.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "sudo cat /usr/share/ca-certificates/4457542.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.01s)

TestFunctional/parallel/NodeLabels (0.13s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-805196 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-805196 ssh "sudo systemctl is-active docker": exit status 1 (364.87031ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-805196 ssh "sudo systemctl is-active crio": exit status 1 (351.097917ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
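Exit status 1 is the expected outcome here: with containerd as the selected runtime, the docker and crio units inside the node report "inactive", and systemctl is-active exits non-zero for any state other than "active". Run by hand:

    $ out/minikube-linux-arm64 -p functional-805196 ssh "sudo systemctl is-active docker"
    $ out/minikube-linux-arm64 -p functional-805196 ssh "sudo systemctl is-active crio"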

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-805196 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-805196 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-805196 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-805196 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 474548: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-805196 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-805196 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [77529485-b0a1-424b-adf7-268bd091ff6c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [77529485-b0a1-424b-adf7-268bd091ff6c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.007543726s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-805196 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.176.156 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
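The tunnel subtests follow the standard LoadBalancer workflow: keep minikube tunnel running, wait for the service to be assigned an ingress IP, then reach it directly. A condensed sketch with the names and IP from this run (the direct fetch is shown as curl for illustration):

    $ out/minikube-linux-arm64 -p functional-805196 tunnel --alsologtostderr &
    $ kubectl --context functional-805196 apply -f testdata/testsvc.yaml
    $ kubectl --context functional-805196 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    $ curl http://10.96.176.156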

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-805196 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-805196 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-805196 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-gxwst" [94e37fd3-88d1-4268-a52b-9f0dab4bc45f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-gxwst" [94e37fd3-88d1-4268-a52b-9f0dab4bc45f] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004572614s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.30s)

TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 service list -o json
functional_test.go:1490: Took "510.633012ms" to run "out/minikube-linux-arm64 -p functional-805196 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30818
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

TestFunctional/parallel/ServiceCmd/URL (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30818
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.48s)

TestFunctional/parallel/MountCmd/any-port (7.87s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-805196 /tmp/TestFunctionalparallelMountCmdany-port3623120810/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1711967767001826106" to /tmp/TestFunctionalparallelMountCmdany-port3623120810/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1711967767001826106" to /tmp/TestFunctionalparallelMountCmdany-port3623120810/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1711967767001826106" to /tmp/TestFunctionalparallelMountCmdany-port3623120810/001/test-1711967767001826106
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-805196 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (453.162891ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr  1 10:36 created-by-test
-rw-r--r-- 1 docker docker 24 Apr  1 10:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr  1 10:36 test-1711967767001826106
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh cat /mount-9p/test-1711967767001826106
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-805196 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d4401a33-ee9a-4997-93f2-a1d3c682e93e] Pending
helpers_test.go:344: "busybox-mount" [d4401a33-ee9a-4997-93f2-a1d3c682e93e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d4401a33-ee9a-4997-93f2-a1d3c682e93e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d4401a33-ee9a-4997-93f2-a1d3c682e93e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004919509s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-805196 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-805196 /tmp/TestFunctionalparallelMountCmdany-port3623120810/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.87s)
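The single failed findmnt probe above is just the test polling before the mount daemon is ready; it is retried and succeeds. The 9p mount flow by hand, with an arbitrary host directory standing in for the test's temp dir:

    $ out/minikube-linux-arm64 mount -p functional-805196 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
    $ out/minikube-linux-arm64 -p functional-805196 ssh "findmnt -T /mount-9p | grep 9p"
    $ out/minikube-linux-arm64 -p functional-805196 ssh -- ls -la /mount-9p
    $ out/minikube-linux-arm64 -p functional-805196 ssh "sudo umount -f /mount-9p"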

TestFunctional/parallel/ProfileCmd/profile_list (0.64s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "555.790807ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "85.620739ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.64s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "338.930621ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "70.099703ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/MountCmd/specific-port (2.61s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-805196 /tmp/TestFunctionalparallelMountCmdspecific-port4269749799/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-805196 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (550.928168ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-805196 /tmp/TestFunctionalparallelMountCmdspecific-port4269749799/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-805196 ssh "sudo umount -f /mount-9p": exit status 1 (355.537267ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-805196 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-805196 /tmp/TestFunctionalparallelMountCmdspecific-port4269749799/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.61s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.06s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-805196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2276708914/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-805196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2276708914/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-805196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2276708914/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-805196 ssh "findmnt -T" /mount1: exit status 1 (638.115159ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
2024/04/01 10:36:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-805196 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-805196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2276708914/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-805196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2276708914/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-805196 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2276708914/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.06s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.31s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-805196 version -o=json --components: (1.30648869s)
--- PASS: TestFunctional/parallel/Version/components (1.31s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-805196 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-805196
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-805196 image ls --format short --alsologtostderr:
I0401 10:36:37.493391  479425 out.go:291] Setting OutFile to fd 1 ...
I0401 10:36:37.493664  479425 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 10:36:37.493713  479425 out.go:304] Setting ErrFile to fd 2...
I0401 10:36:37.493739  479425 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 10:36:37.494042  479425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18551-440344/.minikube/bin
I0401 10:36:37.494936  479425 config.go:182] Loaded profile config "functional-805196": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0401 10:36:37.495196  479425 config.go:182] Loaded profile config "functional-805196": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0401 10:36:37.495955  479425 cli_runner.go:164] Run: docker container inspect functional-805196 --format={{.State.Status}}
I0401 10:36:37.513797  479425 ssh_runner.go:195] Run: systemctl --version
I0401 10:36:37.513848  479425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-805196
I0401 10:36:37.538591  479425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/functional-805196/id_rsa Username:docker}
I0401 10:36:37.641451  479425 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
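The short format above prints one image reference per line; the table format exercised by the next test adds image IDs and sizes. Both are plain invocations of image ls:

    $ out/minikube-linux-arm64 -p functional-805196 image ls --format short
    $ out/minikube-linux-arm64 -p functional-805196 image ls --format table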

TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-805196 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.29.3            | sha256:121d70 | 30.6MB |
| registry.k8s.io/kube-scheduler              | v1.29.3            | sha256:4b51f9 | 16.9MB |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4740c1 | 25.3MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/library/nginx                     | alpine             | sha256:b8c826 | 17.6MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:014faa | 66.2MB |
| registry.k8s.io/kube-apiserver              | v1.29.3            | sha256:258111 | 32.1MB |
| docker.io/library/nginx                     | latest             | sha256:070027 | 67.2MB |
| registry.k8s.io/kube-proxy                  | v1.29.3            | sha256:0e9b4a | 25MB   |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/minikube-local-cache-test | functional-805196  | sha256:e3dda6 | 989B   |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-805196 image ls --format table --alsologtostderr:
I0401 10:36:37.812166  479486 out.go:291] Setting OutFile to fd 1 ...
I0401 10:36:37.812532  479486 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 10:36:37.812569  479486 out.go:304] Setting ErrFile to fd 2...
I0401 10:36:37.812590  479486 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 10:36:37.812933  479486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18551-440344/.minikube/bin
I0401 10:36:37.813810  479486 config.go:182] Loaded profile config "functional-805196": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0401 10:36:37.814081  479486 config.go:182] Loaded profile config "functional-805196": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0401 10:36:37.814669  479486 cli_runner.go:164] Run: docker container inspect functional-805196 --format={{.State.Status}}
I0401 10:36:37.845633  479486 ssh_runner.go:195] Run: systemctl --version
I0401 10:36:37.845718  479486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-805196
I0401 10:36:37.870866  479486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/functional-805196/id_rsa Username:docker}
I0401 10:36:37.966110  479486 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-805196 image ls --format json --alsologtostderr:
[{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTa
gs":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:e3dda6a1b6b05888407a7ab181e69c7dcca4bddd85c5de69f0999edf164a29d1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-805196"],"size":"989"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"66189079"},{"id":"sha256:0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775","repoDigests":["registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3
"],"size":"25039677"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"32143347"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"30578527"},{"id":"sha2
56:4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"16931371"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"25336339"},{"id":"sha256:b8c82647e8a2586145e422943ae4c69c9b1600db636e1269efd256360eb396b0","repoDigests":["docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742"],"repoTags":["docker.io/library/nginx:alp
ine"],"size":"17601398"},{"id":"sha256:070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f41a10f53","repoDigests":["docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"67216851"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-805196 image ls --format json --alsologtostderr:
I0401 10:36:37.789133  479482 out.go:291] Setting OutFile to fd 1 ...
I0401 10:36:37.789305  479482 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 10:36:37.789313  479482 out.go:304] Setting ErrFile to fd 2...
I0401 10:36:37.789318  479482 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 10:36:37.789564  479482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18551-440344/.minikube/bin
I0401 10:36:37.790223  479482 config.go:182] Loaded profile config "functional-805196": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0401 10:36:37.790344  479482 config.go:182] Loaded profile config "functional-805196": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0401 10:36:37.790831  479482 cli_runner.go:164] Run: docker container inspect functional-805196 --format={{.State.Status}}
I0401 10:36:37.813956  479482 ssh_runner.go:195] Run: systemctl --version
I0401 10:36:37.814012  479482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-805196
I0401 10:36:37.834165  479482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/functional-805196/id_rsa Username:docker}
I0401 10:36:37.933965  479482 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
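The JSON listing above is a flat array of objects with id, repoDigests, repoTags, and size fields. A minimal sketch that decodes that shape; the struct below is illustrative only, mirroring the fields visible in this run's output rather than a published minikube API:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Placeholder binary and profile, as in the report.
	out, err := exec.Command("minikube", "-p", "functional-805196",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range images {
		fmt.Println(img.ID, img.RepoTags, img.Size)
	}
}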

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-805196 image ls --format yaml --alsologtostderr:
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "30578527"
- id: sha256:e3dda6a1b6b05888407a7ab181e69c7dcca4bddd85c5de69f0999edf164a29d1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-805196
size: "989"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:b8c82647e8a2586145e422943ae4c69c9b1600db636e1269efd256360eb396b0
repoDigests:
- docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742
repoTags:
- docker.io/library/nginx:alpine
size: "17601398"
- id: sha256:070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f41a10f53
repoDigests:
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "67216851"
- id: sha256:2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "32143347"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775
repoDigests:
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "25039677"
- id: sha256:4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "16931371"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "25336339"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "66189079"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-805196 image ls --format yaml --alsologtostderr:
I0401 10:36:37.500496  479424 out.go:291] Setting OutFile to fd 1 ...
I0401 10:36:37.500767  479424 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 10:36:37.500803  479424 out.go:304] Setting ErrFile to fd 2...
I0401 10:36:37.500846  479424 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 10:36:37.501229  479424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18551-440344/.minikube/bin
I0401 10:36:37.502141  479424 config.go:182] Loaded profile config "functional-805196": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0401 10:36:37.502374  479424 config.go:182] Loaded profile config "functional-805196": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0401 10:36:37.503201  479424 cli_runner.go:164] Run: docker container inspect functional-805196 --format={{.State.Status}}
I0401 10:36:37.533296  479424 ssh_runner.go:195] Run: systemctl --version
I0401 10:36:37.533364  479424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-805196
I0401 10:36:37.552864  479424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/functional-805196/id_rsa Username:docker}
I0401 10:36:37.650290  479424 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-805196 ssh pgrep buildkitd: exit status 1 (316.009191ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 image build -t localhost/my-image:functional-805196 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-805196 image build -t localhost/my-image:functional-805196 testdata/build --alsologtostderr: (2.083045372s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-805196 image build -t localhost/my-image:functional-805196 testdata/build --alsologtostderr:
I0401 10:36:38.365126  479587 out.go:291] Setting OutFile to fd 1 ...
I0401 10:36:38.365627  479587 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 10:36:38.365641  479587 out.go:304] Setting ErrFile to fd 2...
I0401 10:36:38.365656  479587 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 10:36:38.365902  479587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18551-440344/.minikube/bin
I0401 10:36:38.366520  479587 config.go:182] Loaded profile config "functional-805196": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0401 10:36:38.367624  479587 config.go:182] Loaded profile config "functional-805196": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
I0401 10:36:38.368234  479587 cli_runner.go:164] Run: docker container inspect functional-805196 --format={{.State.Status}}
I0401 10:36:38.384267  479587 ssh_runner.go:195] Run: systemctl --version
I0401 10:36:38.384317  479587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-805196
I0401 10:36:38.400266  479587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/functional-805196/id_rsa Username:docker}
I0401 10:36:38.493558  479587 build_images.go:161] Building image from path: /tmp/build.3822661252.tar
I0401 10:36:38.493628  479587 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0401 10:36:38.502970  479587 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3822661252.tar
I0401 10:36:38.507041  479587 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3822661252.tar: stat -c "%s %y" /var/lib/minikube/build/build.3822661252.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3822661252.tar': No such file or directory
I0401 10:36:38.507074  479587 ssh_runner.go:362] scp /tmp/build.3822661252.tar --> /var/lib/minikube/build/build.3822661252.tar (3072 bytes)
I0401 10:36:38.532819  479587 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3822661252
I0401 10:36:38.542306  479587 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3822661252 -xf /var/lib/minikube/build/build.3822661252.tar
I0401 10:36:38.551433  479587 containerd.go:394] Building image: /var/lib/minikube/build/build.3822661252
I0401 10:36:38.551535  479587 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3822661252 --local dockerfile=/var/lib/minikube/build/build.3822661252 --output type=image,name=localhost/my-image:functional-805196
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:5d855132eeec6848ccf54bbefd8893992cdc57e96c59bcd17ecbe8a527acf5e5
#8 exporting manifest sha256:5d855132eeec6848ccf54bbefd8893992cdc57e96c59bcd17ecbe8a527acf5e5 0.0s done
#8 exporting config sha256:1e58fa566ce7f089fd06eafa67299522e5451f668d927f8f808cb1479c32b4a5 0.0s done
#8 naming to localhost/my-image:functional-805196 done
#8 DONE 0.2s
I0401 10:36:40.353023  479587 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3822661252 --local dockerfile=/var/lib/minikube/build/build.3822661252 --output type=image,name=localhost/my-image:functional-805196: (1.801456497s)
I0401 10:36:40.353123  479587 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3822661252
I0401 10:36:40.362307  479587 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3822661252.tar
I0401 10:36:40.373100  479587 build_images.go:217] Built localhost/my-image:functional-805196 from /tmp/build.3822661252.tar
I0401 10:36:40.373128  479587 build_images.go:133] succeeded building to: functional-805196
I0401 10:36:40.373133  479587 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.63s)
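The stderr trace above shows how the image is built on a containerd node: the build context is packed into a tar on the host, copied under /var/lib/minikube/build, extracted, and then built with buildctl. A minimal sketch of the in-node steps driven through minikube ssh; the remote directory is a placeholder, the copy of the context tar is omitted, and the binary name and profile are assumed from this run:

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	// Placeholder binary; the report uses out/minikube-linux-arm64
	// with profile functional-805196.
	cmd := exec.Command("minikube",
		append([]string{"-p", "functional-805196", "ssh", "--"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	dir := "/var/lib/minikube/build/example" // placeholder remote build dir
	steps := [][]string{
		{"sudo", "mkdir", "-p", dir},
		{"sudo", "tar", "-C", dir, "-xf", dir + ".tar"}, // context tar copied beforehand
		{"sudo", "buildctl", "build",
			"--frontend", "dockerfile.v0",
			"--local", "context=" + dir,
			"--local", "dockerfile=" + dir,
			"--output", "type=image,name=localhost/my-image:functional-805196"},
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}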

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.31738158s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-805196
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.34s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 image rm gcr.io/google-containers/addon-resizer:functional-805196 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-805196
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-805196 image save --daemon gcr.io/google-containers/addon-resizer:functional-805196 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-805196
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-805196
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-805196
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-805196
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (125.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-888048 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0401 10:37:09.588804  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-888048 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m4.416685823s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (125.32s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (30.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-888048 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-888048 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-888048 -- rollout status deployment/busybox: (27.213781312s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-888048 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-888048 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-888048 -- exec busybox-7fdf7869d9-h89gp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-888048 -- exec busybox-7fdf7869d9-hmdbh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-888048 -- exec busybox-7fdf7869d9-t2djc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-888048 -- exec busybox-7fdf7869d9-h89gp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-888048 -- exec busybox-7fdf7869d9-hmdbh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-888048 -- exec busybox-7fdf7869d9-t2djc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-888048 -- exec busybox-7fdf7869d9-h89gp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-888048 -- exec busybox-7fdf7869d9-hmdbh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-888048 -- exec busybox-7fdf7869d9-t2djc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (30.14s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-888048 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-888048 -- exec busybox-7fdf7869d9-h89gp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-888048 -- exec busybox-7fdf7869d9-h89gp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-888048 -- exec busybox-7fdf7869d9-hmdbh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-888048 -- exec busybox-7fdf7869d9-hmdbh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-888048 -- exec busybox-7fdf7869d9-t2djc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-888048 -- exec busybox-7fdf7869d9-t2djc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.73s)
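The pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) pulls the host gateway address out of line 5, field 3 of the nslookup output before pinging it. A minimal sketch of the same extraction; the sample string is only a guess at the busybox nslookup layout this pipeline expects, and 192.168.49.1 is the gateway seen in this report:

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `nslookup ... | awk 'NR==5' | cut -d' ' -f3`: it takes the
// fifth line of the nslookup output and returns its third space-separated field.
func hostIP(nslookupOutput string) string {
	lines := strings.Split(nslookupOutput, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Hypothetical output shaped like busybox nslookup; real output may differ.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.49.1\n"
	fmt.Println(hostIP(sample)) // 192.168.49.1
}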

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (20.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-888048 -v=7 --alsologtostderr
E0401 10:39:25.741989  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-888048 -v=7 --alsologtostderr: (19.525132254s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-888048 status -v=7 --alsologtostderr: (1.023241845s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.55s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-888048 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-888048 status --output json -v=7 --alsologtostderr: (1.068872261s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 cp testdata/cp-test.txt ha-888048:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 cp ha-888048:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile916651845/001/cp-test_ha-888048.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 cp ha-888048:/home/docker/cp-test.txt ha-888048-m02:/home/docker/cp-test_ha-888048_ha-888048-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m02 "sudo cat /home/docker/cp-test_ha-888048_ha-888048-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 cp ha-888048:/home/docker/cp-test.txt ha-888048-m03:/home/docker/cp-test_ha-888048_ha-888048-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m03 "sudo cat /home/docker/cp-test_ha-888048_ha-888048-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 cp ha-888048:/home/docker/cp-test.txt ha-888048-m04:/home/docker/cp-test_ha-888048_ha-888048-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m04 "sudo cat /home/docker/cp-test_ha-888048_ha-888048-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 cp testdata/cp-test.txt ha-888048-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 cp ha-888048-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile916651845/001/cp-test_ha-888048-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 cp ha-888048-m02:/home/docker/cp-test.txt ha-888048:/home/docker/cp-test_ha-888048-m02_ha-888048.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048 "sudo cat /home/docker/cp-test_ha-888048-m02_ha-888048.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 cp ha-888048-m02:/home/docker/cp-test.txt ha-888048-m03:/home/docker/cp-test_ha-888048-m02_ha-888048-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m03 "sudo cat /home/docker/cp-test_ha-888048-m02_ha-888048-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 cp ha-888048-m02:/home/docker/cp-test.txt ha-888048-m04:/home/docker/cp-test_ha-888048-m02_ha-888048-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m04 "sudo cat /home/docker/cp-test_ha-888048-m02_ha-888048-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 cp testdata/cp-test.txt ha-888048-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 cp ha-888048-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile916651845/001/cp-test_ha-888048-m03.txt
E0401 10:39:53.429639  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 cp ha-888048-m03:/home/docker/cp-test.txt ha-888048:/home/docker/cp-test_ha-888048-m03_ha-888048.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048 "sudo cat /home/docker/cp-test_ha-888048-m03_ha-888048.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 cp ha-888048-m03:/home/docker/cp-test.txt ha-888048-m02:/home/docker/cp-test_ha-888048-m03_ha-888048-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m02 "sudo cat /home/docker/cp-test_ha-888048-m03_ha-888048-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 cp ha-888048-m03:/home/docker/cp-test.txt ha-888048-m04:/home/docker/cp-test_ha-888048-m03_ha-888048-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m04 "sudo cat /home/docker/cp-test_ha-888048-m03_ha-888048-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 cp testdata/cp-test.txt ha-888048-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 cp ha-888048-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile916651845/001/cp-test_ha-888048-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 cp ha-888048-m04:/home/docker/cp-test.txt ha-888048:/home/docker/cp-test_ha-888048-m04_ha-888048.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048 "sudo cat /home/docker/cp-test_ha-888048-m04_ha-888048.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 cp ha-888048-m04:/home/docker/cp-test.txt ha-888048-m02:/home/docker/cp-test_ha-888048-m04_ha-888048-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m02 "sudo cat /home/docker/cp-test_ha-888048-m04_ha-888048-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 cp ha-888048-m04:/home/docker/cp-test.txt ha-888048-m03:/home/docker/cp-test_ha-888048-m04_ha-888048-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 ssh -n ha-888048-m03 "sudo cat /home/docker/cp-test_ha-888048-m04_ha-888048-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.41s)
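Every CopyFile step above follows one pattern: copy a file to a node with minikube cp, then ssh into that node and sudo cat it back to confirm the contents survived. A minimal sketch of a single round trip; the binary name, profile, node, and paths are taken from this run and stand in for any other cluster:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const profile = "ha-888048"     // profile name from the report
	const node = "ha-888048-m02"    // any node in the cluster
	src := "testdata/cp-test.txt"
	dst := node + ":/home/docker/cp-test.txt"

	// Copy the file to the node, mirroring `minikube -p <profile> cp ...`.
	if out, err := exec.Command("minikube", "-p", profile, "cp", src, dst).CombinedOutput(); err != nil {
		fmt.Println("cp failed:", err, string(out))
		return
	}
	// Read it back over ssh and compare with the local copy.
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	want, err := os.ReadFile(src)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	// Trim before comparing, since the ssh session may add a trailing newline.
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		fmt.Println("contents differ after copy")
	}
}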

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-888048 node stop m02 -v=7 --alsologtostderr: (12.155927873s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-888048 status -v=7 --alsologtostderr: exit status 7 (822.213674ms)

                                                
                                                
-- stdout --
	ha-888048
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-888048-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-888048-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-888048-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 10:40:14.777620  494927 out.go:291] Setting OutFile to fd 1 ...
	I0401 10:40:14.777817  494927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:40:14.777844  494927 out.go:304] Setting ErrFile to fd 2...
	I0401 10:40:14.777866  494927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:40:14.778241  494927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18551-440344/.minikube/bin
	I0401 10:40:14.778530  494927 out.go:298] Setting JSON to false
	I0401 10:40:14.778588  494927 mustload.go:65] Loading cluster: ha-888048
	I0401 10:40:14.779381  494927 config.go:182] Loaded profile config "ha-888048": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0401 10:40:14.779420  494927 status.go:255] checking status of ha-888048 ...
	I0401 10:40:14.780679  494927 notify.go:220] Checking for updates...
	I0401 10:40:14.780781  494927 cli_runner.go:164] Run: docker container inspect ha-888048 --format={{.State.Status}}
	I0401 10:40:14.800142  494927 status.go:330] ha-888048 host status = "Running" (err=<nil>)
	I0401 10:40:14.800168  494927 host.go:66] Checking if "ha-888048" exists ...
	I0401 10:40:14.800479  494927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-888048
	I0401 10:40:14.825345  494927 host.go:66] Checking if "ha-888048" exists ...
	I0401 10:40:14.825709  494927 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 10:40:14.825777  494927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-888048
	I0401 10:40:14.856259  494927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/ha-888048/id_rsa Username:docker}
	I0401 10:40:14.960437  494927 ssh_runner.go:195] Run: systemctl --version
	I0401 10:40:14.965257  494927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 10:40:14.979146  494927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 10:40:15.072053  494927 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:72 SystemTime:2024-04-01 10:40:15.059031375 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0401 10:40:15.072760  494927 kubeconfig.go:125] found "ha-888048" server: "https://192.168.49.254:8443"
	I0401 10:40:15.072797  494927 api_server.go:166] Checking apiserver status ...
	I0401 10:40:15.072847  494927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 10:40:15.087130  494927 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	I0401 10:40:15.098678  494927 api_server.go:182] apiserver freezer: "11:freezer:/docker/3d9006710b05f5329e7ecd207e7830b55706b7b1e7b70496cf1c0a84df651d98/kubepods/burstable/pod499934d733e40d16287521f92f0b3e9c/fb4396ed7e5161b81a7b00841e291d76393fb9754879729ed1882daeb1d1292a"
	I0401 10:40:15.098813  494927 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3d9006710b05f5329e7ecd207e7830b55706b7b1e7b70496cf1c0a84df651d98/kubepods/burstable/pod499934d733e40d16287521f92f0b3e9c/fb4396ed7e5161b81a7b00841e291d76393fb9754879729ed1882daeb1d1292a/freezer.state
	I0401 10:40:15.110443  494927 api_server.go:204] freezer state: "THAWED"
	I0401 10:40:15.110485  494927 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0401 10:40:15.126924  494927 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0401 10:40:15.126972  494927 status.go:422] ha-888048 apiserver status = Running (err=<nil>)
	I0401 10:40:15.126999  494927 status.go:257] ha-888048 status: &{Name:ha-888048 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 10:40:15.127024  494927 status.go:255] checking status of ha-888048-m02 ...
	I0401 10:40:15.127510  494927 cli_runner.go:164] Run: docker container inspect ha-888048-m02 --format={{.State.Status}}
	I0401 10:40:15.149554  494927 status.go:330] ha-888048-m02 host status = "Stopped" (err=<nil>)
	I0401 10:40:15.149581  494927 status.go:343] host is not running, skipping remaining checks
	I0401 10:40:15.149589  494927 status.go:257] ha-888048-m02 status: &{Name:ha-888048-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 10:40:15.149610  494927 status.go:255] checking status of ha-888048-m03 ...
	I0401 10:40:15.149997  494927 cli_runner.go:164] Run: docker container inspect ha-888048-m03 --format={{.State.Status}}
	I0401 10:40:15.181467  494927 status.go:330] ha-888048-m03 host status = "Running" (err=<nil>)
	I0401 10:40:15.181500  494927 host.go:66] Checking if "ha-888048-m03" exists ...
	I0401 10:40:15.181799  494927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-888048-m03
	I0401 10:40:15.201452  494927 host.go:66] Checking if "ha-888048-m03" exists ...
	I0401 10:40:15.201826  494927 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 10:40:15.201886  494927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-888048-m03
	I0401 10:40:15.219804  494927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/ha-888048-m03/id_rsa Username:docker}
	I0401 10:40:15.315399  494927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 10:40:15.331235  494927 kubeconfig.go:125] found "ha-888048" server: "https://192.168.49.254:8443"
	I0401 10:40:15.331266  494927 api_server.go:166] Checking apiserver status ...
	I0401 10:40:15.331314  494927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 10:40:15.344292  494927 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1337/cgroup
	I0401 10:40:15.354310  494927 api_server.go:182] apiserver freezer: "11:freezer:/docker/d0dff53bab3610cd549953ce63a4da22f218a6f6ca7387b76df96fa652fd4526/kubepods/burstable/podde4c9e6ed9446909c051e0773c640efe/126b42dd03f50be73fdfede651eba208d70b9b6fd9eee9aee33a100d7063c6e5"
	I0401 10:40:15.354398  494927 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d0dff53bab3610cd549953ce63a4da22f218a6f6ca7387b76df96fa652fd4526/kubepods/burstable/podde4c9e6ed9446909c051e0773c640efe/126b42dd03f50be73fdfede651eba208d70b9b6fd9eee9aee33a100d7063c6e5/freezer.state
	I0401 10:40:15.363448  494927 api_server.go:204] freezer state: "THAWED"
	I0401 10:40:15.363529  494927 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0401 10:40:15.371346  494927 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0401 10:40:15.371375  494927 status.go:422] ha-888048-m03 apiserver status = Running (err=<nil>)
	I0401 10:40:15.371385  494927 status.go:257] ha-888048-m03 status: &{Name:ha-888048-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 10:40:15.371403  494927 status.go:255] checking status of ha-888048-m04 ...
	I0401 10:40:15.371698  494927 cli_runner.go:164] Run: docker container inspect ha-888048-m04 --format={{.State.Status}}
	I0401 10:40:15.387312  494927 status.go:330] ha-888048-m04 host status = "Running" (err=<nil>)
	I0401 10:40:15.387337  494927 host.go:66] Checking if "ha-888048-m04" exists ...
	I0401 10:40:15.387654  494927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-888048-m04
	I0401 10:40:15.404154  494927 host.go:66] Checking if "ha-888048-m04" exists ...
	I0401 10:40:15.404460  494927 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 10:40:15.404506  494927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-888048-m04
	I0401 10:40:15.420725  494927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/ha-888048-m04/id_rsa Username:docker}
	I0401 10:40:15.514425  494927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 10:40:15.526155  494927 status.go:257] ha-888048-m04 status: &{Name:ha-888048-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.98s)
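The status run captured in the stderr above verifies each running control-plane node by finding the kube-apiserver PID, reading its cgroup freezer state, and probing /healthz through the HA endpoint. A minimal by-hand version of the same sequence, run inside a node via minikube ssh (this assumes the cgroup v1 freezer layout and the 192.168.49.254:8443 endpoint shown in the log; other setups will differ):

	# locate the newest kube-apiserver process, as status.go does
	PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	# read its freezer cgroup path and confirm the container is THAWED (not paused)
	CG=$(sudo egrep '^[0-9]+:freezer:' /proc/$PID/cgroup | cut -d: -f3)
	sudo cat /sys/fs/cgroup/freezer$CG/freezer.state
	# probe the apiserver health endpoint behind the load-balancer address
	curl -sk https://192.168.49.254:8443/healthz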

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.59s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (18.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-888048 node start m02 -v=7 --alsologtostderr: (16.929762264s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-888048 status -v=7 --alsologtostderr: (1.002703379s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.04s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (158.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-888048 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-888048 -v=7 --alsologtostderr
E0401 10:40:38.656102  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
E0401 10:40:38.661716  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
E0401 10:40:38.671960  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
E0401 10:40:38.692514  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
E0401 10:40:38.733343  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
E0401 10:40:38.813667  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
E0401 10:40:38.974099  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
E0401 10:40:39.294652  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
E0401 10:40:39.935362  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
E0401 10:40:41.216302  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
E0401 10:40:43.776540  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
E0401 10:40:48.897195  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
E0401 10:40:59.138270  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-888048 -v=7 --alsologtostderr: (37.563292052s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-888048 --wait=true -v=7 --alsologtostderr
E0401 10:41:19.618505  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
E0401 10:42:00.579711  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-888048 --wait=true -v=7 --alsologtostderr: (2m1.067797119s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-888048
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (158.86s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 node delete m03 -v=7 --alsologtostderr
E0401 10:43:22.500752  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-888048 node delete m03 -v=7 --alsologtostderr: (10.476390193s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.48s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-888048 stop -v=7 --alsologtostderr: (35.859772551s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-888048 status -v=7 --alsologtostderr: exit status 7 (112.339532ms)

                                                
                                                
-- stdout --
	ha-888048
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-888048-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-888048-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 10:44:01.742326  508603 out.go:291] Setting OutFile to fd 1 ...
	I0401 10:44:01.742517  508603 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:44:01.742528  508603 out.go:304] Setting ErrFile to fd 2...
	I0401 10:44:01.742539  508603 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:44:01.742793  508603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18551-440344/.minikube/bin
	I0401 10:44:01.742978  508603 out.go:298] Setting JSON to false
	I0401 10:44:01.743007  508603 mustload.go:65] Loading cluster: ha-888048
	I0401 10:44:01.743067  508603 notify.go:220] Checking for updates...
	I0401 10:44:01.743429  508603 config.go:182] Loaded profile config "ha-888048": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0401 10:44:01.743440  508603 status.go:255] checking status of ha-888048 ...
	I0401 10:44:01.744259  508603 cli_runner.go:164] Run: docker container inspect ha-888048 --format={{.State.Status}}
	I0401 10:44:01.762013  508603 status.go:330] ha-888048 host status = "Stopped" (err=<nil>)
	I0401 10:44:01.762040  508603 status.go:343] host is not running, skipping remaining checks
	I0401 10:44:01.762049  508603 status.go:257] ha-888048 status: &{Name:ha-888048 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 10:44:01.762097  508603 status.go:255] checking status of ha-888048-m02 ...
	I0401 10:44:01.762418  508603 cli_runner.go:164] Run: docker container inspect ha-888048-m02 --format={{.State.Status}}
	I0401 10:44:01.777643  508603 status.go:330] ha-888048-m02 host status = "Stopped" (err=<nil>)
	I0401 10:44:01.777668  508603 status.go:343] host is not running, skipping remaining checks
	I0401 10:44:01.777675  508603 status.go:257] ha-888048-m02 status: &{Name:ha-888048-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 10:44:01.777697  508603 status.go:255] checking status of ha-888048-m04 ...
	I0401 10:44:01.778018  508603 cli_runner.go:164] Run: docker container inspect ha-888048-m04 --format={{.State.Status}}
	I0401 10:44:01.793817  508603 status.go:330] ha-888048-m04 host status = "Stopped" (err=<nil>)
	I0401 10:44:01.793838  508603 status.go:343] host is not running, skipping remaining checks
	I0401 10:44:01.793845  508603 status.go:257] ha-888048-m04 status: &{Name:ha-888048-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.97s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (79.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-888048 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0401 10:44:25.741793  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-888048 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m18.001864616s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (79.04s)
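The readiness assertion at ha_test.go:592 walks every node's status.conditions with a go-template and prints the status of the condition whose type is Ready, so a healthy cluster yields one True per node. An equivalent jsonpath form (hypothetical, not what the test runs) is a little easier to read:

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'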

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.58s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (44.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-888048 --control-plane -v=7 --alsologtostderr
E0401 10:45:38.656091  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-888048 --control-plane -v=7 --alsologtostderr: (42.942965471s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-888048 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-888048 status -v=7 --alsologtostderr: (1.065380654s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.01s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)

                                                
                                    
TestJSONOutput/start/Command (56.35s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-569172 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-569172 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (56.323341434s)
--- PASS: TestJSONOutput/start/Command (56.35s)
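Each line printed by the --output=json run is a CloudEvents-style JSON object; step events (type io.k8s.sigs.minikube.step) carry currentstep, totalsteps and name fields, which is what the Distinct/Increasing subtests below assert over. A sketch of watching that step sequence with jq (illustrative only; the field names are taken from the events shown under TestErrorJSONOutput further down):

	out/minikube-linux-arm64 start -p json-output-569172 --output=json --user=testUser --memory=2200 --wait=true \
	    --driver=docker --container-runtime=containerd \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + "/" + .data.totalsteps + "  " + .data.name'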

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-569172 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-569172 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.77s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-569172 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-569172 --output=json --user=testUser: (5.774470915s)
--- PASS: TestJSONOutput/stop/Command (5.77s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-434372 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-434372 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.985856ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f8575030-1c75-4e54-98d9-54e8dc105e11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-434372] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ecadc59-1282-4891-9e11-48bddb7be1fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18551"}}
	{"specversion":"1.0","id":"69f48c50-56f2-4115-830a-5d295b8f3cd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8df5a1b2-ac48-4968-b02f-ade3f2ba0971","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18551-440344/kubeconfig"}}
	{"specversion":"1.0","id":"fea7fc1e-6eb2-493e-a090-bff3f17eb109","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18551-440344/.minikube"}}
	{"specversion":"1.0","id":"3504efc9-a38f-4275-9fde-71221a84597e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"373c5cc6-7fa1-4997-8b09-8bc8a3c85386","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f914911e-0302-4ca1-924b-71e6ff82247e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-434372" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-434372
--- PASS: TestErrorJSONOutput (0.24s)
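The stdout above is the same event stream, with the failure reported as a single event of type io.k8s.sigs.minikube.error. Pulling that error out with jq would look roughly like this (illustrative pipeline, reusing the profile name and flags from the test):

	out/minikube-linux-arm64 start -p json-output-error-434372 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.exitcode + " " + .data.name + ": " + .data.message'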

                                                
                                    
TestKicCustomNetwork/create_custom_network (44.36s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-838111 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-838111 --network=: (42.229955684s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-838111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-838111
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-838111: (2.11557247s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.36s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (37.55s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-861909 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-861909 --network=bridge: (35.525887982s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-861909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-861909
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-861909: (1.998026253s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.55s)

                                                
                                    
TestKicExistingNetwork (33.51s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-577888 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-577888 --network=existing-network: (31.316000826s)
helpers_test.go:175: Cleaning up "existing-network-577888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-577888
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-577888: (2.056324832s)
--- PASS: TestKicExistingNetwork (33.51s)
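Unlike the --network= runs above, this test points minikube at a Docker network that already exists. Reproduced by hand it would look roughly like the following (the subnet is a placeholder and the test normally creates the network itself, so this is only a sketch):

	docker network create existing-network --subnet=192.168.9.0/24
	out/minikube-linux-arm64 start -p existing-network-577888 --network=existing-network
	out/minikube-linux-arm64 delete -p existing-network-577888
	docker network rm existing-network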

                                                
                                    
TestKicCustomSubnet (36.92s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-664754 --subnet=192.168.60.0/24
E0401 10:49:25.742306  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-664754 --subnet=192.168.60.0/24: (34.772833466s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-664754 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-664754" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-664754
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-664754: (2.128462696s)
--- PASS: TestKicCustomSubnet (36.92s)

                                                
                                    
TestKicStaticIP (32.97s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-663307 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-663307 --static-ip=192.168.200.200: (30.723516837s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-663307 ip
helpers_test.go:175: Cleaning up "static-ip-663307" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-663307
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-663307: (2.107779919s)
--- PASS: TestKicStaticIP (32.97s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (68.5s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-076706 --driver=docker  --container-runtime=containerd
E0401 10:50:38.656109  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
E0401 10:50:48.790382  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-076706 --driver=docker  --container-runtime=containerd: (31.257987511s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-079521 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-079521 --driver=docker  --container-runtime=containerd: (31.847345913s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-076706
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-079521
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-079521" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-079521
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-079521: (1.95177537s)
helpers_test.go:175: Cleaning up "first-076706" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-076706
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-076706: (2.197239077s)
--- PASS: TestMinikubeProfile (68.50s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.71s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-321097 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-321097 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.712186717s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.71s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-321097 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.25s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-335045 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-335045 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.248131609s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.25s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-335045 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-321097 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-321097 --alsologtostderr -v=5: (1.621484291s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-335045 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-335045
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-335045: (1.201400489s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.28s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-335045
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-335045: (6.284338381s)
--- PASS: TestMountStart/serial/RestartStopped (7.28s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.64s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-335045 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.64s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (79.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-665938 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-665938 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m19.446838785s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (79.97s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-665938 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-665938 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-665938 -- rollout status deployment/busybox: (2.544614984s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-665938 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-665938 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-665938 -- exec busybox-7fdf7869d9-qq6zk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-665938 -- exec busybox-7fdf7869d9-rx8ll -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-665938 -- exec busybox-7fdf7869d9-qq6zk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-665938 -- exec busybox-7fdf7869d9-rx8ll -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-665938 -- exec busybox-7fdf7869d9-qq6zk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-665938 -- exec busybox-7fdf7869d9-rx8ll -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.53s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-665938 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-665938 -- exec busybox-7fdf7869d9-qq6zk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-665938 -- exec busybox-7fdf7869d9-qq6zk -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-665938 -- exec busybox-7fdf7869d9-rx8ll -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-665938 -- exec busybox-7fdf7869d9-rx8ll -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.07s)
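The test resolves host.minikube.internal inside each busybox pod and pings the address it gets back (192.168.58.1 here). The same check against one pod, copied from the commands above (the awk/cut indices assume busybox nslookup's output layout, as the test does):

	POD=busybox-7fdf7869d9-qq6zk
	HOST_IP=$(kubectl --context multinode-665938 exec $POD -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl --context multinode-665938 exec $POD -- ping -c 1 "$HOST_IP"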

                                                
                                    
TestMultiNode/serial/AddNode (17.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-665938 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-665938 -v 3 --alsologtostderr: (16.427047054s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.14s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-665938 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.14s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 cp testdata/cp-test.txt multinode-665938:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 ssh -n multinode-665938 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 cp multinode-665938:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2688805672/001/cp-test_multinode-665938.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 ssh -n multinode-665938 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 cp multinode-665938:/home/docker/cp-test.txt multinode-665938-m02:/home/docker/cp-test_multinode-665938_multinode-665938-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 ssh -n multinode-665938 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 ssh -n multinode-665938-m02 "sudo cat /home/docker/cp-test_multinode-665938_multinode-665938-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 cp multinode-665938:/home/docker/cp-test.txt multinode-665938-m03:/home/docker/cp-test_multinode-665938_multinode-665938-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 ssh -n multinode-665938 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 ssh -n multinode-665938-m03 "sudo cat /home/docker/cp-test_multinode-665938_multinode-665938-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 cp testdata/cp-test.txt multinode-665938-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 ssh -n multinode-665938-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 cp multinode-665938-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2688805672/001/cp-test_multinode-665938-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 ssh -n multinode-665938-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 cp multinode-665938-m02:/home/docker/cp-test.txt multinode-665938:/home/docker/cp-test_multinode-665938-m02_multinode-665938.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 ssh -n multinode-665938-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 ssh -n multinode-665938 "sudo cat /home/docker/cp-test_multinode-665938-m02_multinode-665938.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 cp multinode-665938-m02:/home/docker/cp-test.txt multinode-665938-m03:/home/docker/cp-test_multinode-665938-m02_multinode-665938-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 ssh -n multinode-665938-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 ssh -n multinode-665938-m03 "sudo cat /home/docker/cp-test_multinode-665938-m02_multinode-665938-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 cp testdata/cp-test.txt multinode-665938-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 ssh -n multinode-665938-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 cp multinode-665938-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2688805672/001/cp-test_multinode-665938-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 ssh -n multinode-665938-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 cp multinode-665938-m03:/home/docker/cp-test.txt multinode-665938:/home/docker/cp-test_multinode-665938-m03_multinode-665938.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 ssh -n multinode-665938-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 ssh -n multinode-665938 "sudo cat /home/docker/cp-test_multinode-665938-m03_multinode-665938.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 cp multinode-665938-m03:/home/docker/cp-test.txt multinode-665938-m02:/home/docker/cp-test_multinode-665938-m03_multinode-665938-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 ssh -n multinode-665938-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 ssh -n multinode-665938-m02 "sudo cat /home/docker/cp-test_multinode-665938-m03_multinode-665938-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.22s)
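Every copy in this test is verified by reading the file back over SSH on the destination node; the per-file round trip distilled from the commands above (profile and node names as in the log) is:

	out/minikube-linux-arm64 -p multinode-665938 cp testdata/cp-test.txt multinode-665938-m02:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p multinode-665938 ssh -n multinode-665938-m02 "sudo cat /home/docker/cp-test.txt"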

                                                
                                    
TestMultiNode/serial/StopNode (2.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-665938 node stop m03: (1.231703198s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-665938 status: exit status 7 (509.99667ms)

                                                
                                                
-- stdout --
	multinode-665938
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-665938-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-665938-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-665938 status --alsologtostderr: exit status 7 (525.463159ms)

                                                
                                                
-- stdout --
	multinode-665938
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-665938-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-665938-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 10:53:57.648826  560629 out.go:291] Setting OutFile to fd 1 ...
	I0401 10:53:57.648968  560629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:53:57.648974  560629 out.go:304] Setting ErrFile to fd 2...
	I0401 10:53:57.648979  560629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:53:57.649256  560629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18551-440344/.minikube/bin
	I0401 10:53:57.649436  560629 out.go:298] Setting JSON to false
	I0401 10:53:57.649483  560629 mustload.go:65] Loading cluster: multinode-665938
	I0401 10:53:57.649577  560629 notify.go:220] Checking for updates...
	I0401 10:53:57.650525  560629 config.go:182] Loaded profile config "multinode-665938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0401 10:53:57.650548  560629 status.go:255] checking status of multinode-665938 ...
	I0401 10:53:57.651093  560629 cli_runner.go:164] Run: docker container inspect multinode-665938 --format={{.State.Status}}
	I0401 10:53:57.668234  560629 status.go:330] multinode-665938 host status = "Running" (err=<nil>)
	I0401 10:53:57.668256  560629 host.go:66] Checking if "multinode-665938" exists ...
	I0401 10:53:57.668542  560629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-665938
	I0401 10:53:57.684619  560629 host.go:66] Checking if "multinode-665938" exists ...
	I0401 10:53:57.685074  560629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 10:53:57.685178  560629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-665938
	I0401 10:53:57.702523  560629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33311 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/multinode-665938/id_rsa Username:docker}
	I0401 10:53:57.798500  560629 ssh_runner.go:195] Run: systemctl --version
	I0401 10:53:57.802905  560629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 10:53:57.814421  560629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 10:53:57.880233  560629 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-04-01 10:53:57.870687812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0401 10:53:57.880889  560629 kubeconfig.go:125] found "multinode-665938" server: "https://192.168.58.2:8443"
	I0401 10:53:57.880929  560629 api_server.go:166] Checking apiserver status ...
	I0401 10:53:57.881041  560629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 10:53:57.894874  560629 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1429/cgroup
	I0401 10:53:57.904462  560629 api_server.go:182] apiserver freezer: "11:freezer:/docker/4a92b7e19440652584141492c24b16e193fa264dedd0c4b95c8770a1769a6dbb/kubepods/burstable/podfd1b786eb295e86fda12b82890838146/8379cde61a931e25e8cd636d9c276ded81f83c785a358877230f0bf8bf0847cd"
	I0401 10:53:57.904540  560629 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4a92b7e19440652584141492c24b16e193fa264dedd0c4b95c8770a1769a6dbb/kubepods/burstable/podfd1b786eb295e86fda12b82890838146/8379cde61a931e25e8cd636d9c276ded81f83c785a358877230f0bf8bf0847cd/freezer.state
	I0401 10:53:57.913336  560629 api_server.go:204] freezer state: "THAWED"
	I0401 10:53:57.913370  560629 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0401 10:53:57.921419  560629 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0401 10:53:57.921448  560629 status.go:422] multinode-665938 apiserver status = Running (err=<nil>)
	I0401 10:53:57.921459  560629 status.go:257] multinode-665938 status: &{Name:multinode-665938 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 10:53:57.921477  560629 status.go:255] checking status of multinode-665938-m02 ...
	I0401 10:53:57.921796  560629 cli_runner.go:164] Run: docker container inspect multinode-665938-m02 --format={{.State.Status}}
	I0401 10:53:57.938035  560629 status.go:330] multinode-665938-m02 host status = "Running" (err=<nil>)
	I0401 10:53:57.938062  560629 host.go:66] Checking if "multinode-665938-m02" exists ...
	I0401 10:53:57.938371  560629 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-665938-m02
	I0401 10:53:57.957136  560629 host.go:66] Checking if "multinode-665938-m02" exists ...
	I0401 10:53:57.957456  560629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 10:53:57.957505  560629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-665938-m02
	I0401 10:53:57.978731  560629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33316 SSHKeyPath:/home/jenkins/minikube-integration/18551-440344/.minikube/machines/multinode-665938-m02/id_rsa Username:docker}
	I0401 10:53:58.074688  560629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 10:53:58.087525  560629 status.go:257] multinode-665938-m02 status: &{Name:multinode-665938-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0401 10:53:58.087562  560629 status.go:255] checking status of multinode-665938-m03 ...
	I0401 10:53:58.087933  560629 cli_runner.go:164] Run: docker container inspect multinode-665938-m03 --format={{.State.Status}}
	I0401 10:53:58.103562  560629 status.go:330] multinode-665938-m03 host status = "Stopped" (err=<nil>)
	I0401 10:53:58.103588  560629 status.go:343] host is not running, skipping remaining checks
	I0401 10:53:58.103596  560629 status.go:257] multinode-665938-m03 status: &{Name:multinode-665938-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
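Note: the exit status 7 above is the expected result, not a failure; minikube status uses it whenever any node in the profile reports a Stopped host or kubelet (here multinode-665938-m03). A minimal sketch of the same check, assuming the profile from this run still exists:

    out/minikube-linux-arm64 -p multinode-665938 status --alsologtostderr
    echo $?    # 7 while multinode-665938-m03 is stopped; 0 once every node is Running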

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-665938 node start m03 -v=7 --alsologtostderr: (8.466343834s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.20s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (83.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-665938
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-665938
E0401 10:54:25.741775  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-665938: (24.954430834s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-665938 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-665938 --wait=true -v=8 --alsologtostderr: (58.54181302s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-665938
--- PASS: TestMultiNode/serial/RestartKeepsNodes (83.62s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-665938 node delete m03: (5.115841659s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.80s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 stop
E0401 10:55:38.655583  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-665938 stop: (24.007936407s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-665938 status: exit status 7 (101.344659ms)

                                                
                                                
-- stdout --
	multinode-665938
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-665938-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-665938 status --alsologtostderr: exit status 7 (91.794572ms)

                                                
                                                
-- stdout --
	multinode-665938
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-665938-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 10:56:00.905816  568452 out.go:291] Setting OutFile to fd 1 ...
	I0401 10:56:00.906006  568452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:56:00.906019  568452 out.go:304] Setting ErrFile to fd 2...
	I0401 10:56:00.906025  568452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 10:56:00.906303  568452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18551-440344/.minikube/bin
	I0401 10:56:00.906521  568452 out.go:298] Setting JSON to false
	I0401 10:56:00.906564  568452 mustload.go:65] Loading cluster: multinode-665938
	I0401 10:56:00.906649  568452 notify.go:220] Checking for updates...
	I0401 10:56:00.907020  568452 config.go:182] Loaded profile config "multinode-665938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0401 10:56:00.907040  568452 status.go:255] checking status of multinode-665938 ...
	I0401 10:56:00.907588  568452 cli_runner.go:164] Run: docker container inspect multinode-665938 --format={{.State.Status}}
	I0401 10:56:00.923046  568452 status.go:330] multinode-665938 host status = "Stopped" (err=<nil>)
	I0401 10:56:00.923082  568452 status.go:343] host is not running, skipping remaining checks
	I0401 10:56:00.923091  568452 status.go:257] multinode-665938 status: &{Name:multinode-665938 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 10:56:00.923114  568452 status.go:255] checking status of multinode-665938-m02 ...
	I0401 10:56:00.923425  568452 cli_runner.go:164] Run: docker container inspect multinode-665938-m02 --format={{.State.Status}}
	I0401 10:56:00.939086  568452 status.go:330] multinode-665938-m02 host status = "Stopped" (err=<nil>)
	I0401 10:56:00.939111  568452 status.go:343] host is not running, skipping remaining checks
	I0401 10:56:00.939144  568452 status.go:257] multinode-665938-m02 status: &{Name:multinode-665938-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.20s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (54.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-665938 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-665938 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (53.660623513s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-665938 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.33s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (33.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-665938
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-665938-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-665938-m02 --driver=docker  --container-runtime=containerd: exit status 14 (87.205235ms)

                                                
                                                
-- stdout --
	* [multinode-665938-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18551
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18551-440344/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18551-440344/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-665938-m02' is duplicated with machine name 'multinode-665938-m02' in profile 'multinode-665938'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-665938-m03 --driver=docker  --container-runtime=containerd
E0401 10:57:01.702989  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-665938-m03 --driver=docker  --container-runtime=containerd: (30.875421957s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-665938
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-665938: exit status 80 (330.898886ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-665938 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-665938-m03 already exists in multinode-665938-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-665938-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-665938-m03: (1.950714513s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.31s)
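Note: exit status 14 (MK_USAGE) and exit status 80 (GUEST_NODE_ADD) above are the rejections this test is asserting. A condensed sketch of the same conflicts, assuming the multinode-665938 profile and its m02/m03 machines are still present:

    # profile name collides with a machine name inside another profile
    out/minikube-linux-arm64 start -p multinode-665938-m02 --driver=docker --container-runtime=containerd
    echo $?    # 14 (MK_USAGE: profile name should be unique)

    # standalone profile name collides with the node name the next "node add" would use
    out/minikube-linux-arm64 start -p multinode-665938-m03 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 node add -p multinode-665938
    echo $?    # 80 (GUEST_NODE_ADD: node multinode-665938-m03 already exists)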

                                                
                                    
TestPreload (109.23s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-926524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-926524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m11.03201199s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-926524 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-926524 image pull gcr.io/k8s-minikube/busybox: (1.248616008s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-926524
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-926524: (12.195655648s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-926524 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-926524 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (21.891765019s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-926524 image list
helpers_test.go:175: Cleaning up "test-preload-926524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-926524
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-926524: (2.491489371s)
--- PASS: TestPreload (109.23s)

                                                
                                    
TestScheduledStopUnix (106.75s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-908263 --memory=2048 --driver=docker  --container-runtime=containerd
E0401 10:59:25.741416  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-908263 --memory=2048 --driver=docker  --container-runtime=containerd: (30.135301129s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-908263 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-908263 -n scheduled-stop-908263
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-908263 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-908263 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-908263 -n scheduled-stop-908263
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-908263
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-908263 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0401 11:00:38.656251  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-908263
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-908263: exit status 7 (78.233494ms)

                                                
                                                
-- stdout --
	scheduled-stop-908263
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-908263 -n scheduled-stop-908263
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-908263 -n scheduled-stop-908263: exit status 7 (77.175438ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-908263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-908263
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-908263: (4.976327599s)
--- PASS: TestScheduledStopUnix (106.75s)
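Note: the scheduled-stop flow exercised above can be reproduced by hand; a sketch, assuming an existing profile named scheduled-stop-908263:

    out/minikube-linux-arm64 stop -p scheduled-stop-908263 --schedule 5m        # arm a deferred stop
    out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-908263
    out/minikube-linux-arm64 stop -p scheduled-stop-908263 --cancel-scheduled   # disarm it
    out/minikube-linux-arm64 stop -p scheduled-stop-908263 --schedule 15s       # arm a short one and let it fire
    # after the schedule elapses, status exits 7 and reports host: Stopped
    out/minikube-linux-arm64 status -p scheduled-stop-908263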

                                                
                                    
TestInsufficientStorage (10.16s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-145062 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-145062 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.654984226s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3ce9a4a6-9573-4e5c-b04a-02a5bcbd577f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-145062] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3a13a537-0c3b-4dff-9d8a-2d32a3631858","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18551"}}
	{"specversion":"1.0","id":"405506fa-bc98-4c97-bd74-9a0026de8c45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7ccc29db-a2cc-46d9-b2a9-49a277a43a40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18551-440344/kubeconfig"}}
	{"specversion":"1.0","id":"89ce0e0d-6bc7-41e5-8bc4-4fcd9fd66096","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18551-440344/.minikube"}}
	{"specversion":"1.0","id":"c034ecdf-47f9-4745-bc0b-a33c30e79aba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4ee1c3e9-2553-4845-a64c-da4506d7bf30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"805f7d43-d60f-4302-bc4e-37eb9326a8da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b9882a3b-db0e-42c7-b7b2-085c3d1d161b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c209f0e8-92b0-423d-b42c-a575d0ae5fd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c1107d32-20e2-4464-b301-f02e571f1406","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"edd42dbf-2f07-4b57-8625-c2a0aa79f965","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-145062\" primary control-plane node in \"insufficient-storage-145062\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d5f5de63-3a83-444e-a82b-a17beecff16f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-1711559786-18485 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"bbcdbc8f-13f8-4a3d-97f5-4c8552b4df2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"5724abd7-8266-4ee9-9585-724ad7bfee52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-145062 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-145062 --output=json --layout=cluster: exit status 7 (293.009379ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-145062","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-145062","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 11:01:16.574228  586087 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-145062" does not appear in /home/jenkins/minikube-integration/18551-440344/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-145062 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-145062 --output=json --layout=cluster: exit status 7 (279.111705ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-145062","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-145062","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 11:01:16.855505  586142 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-145062" does not appear in /home/jenkins/minikube-integration/18551-440344/kubeconfig
	E0401 11:01:16.865327  586142 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/insufficient-storage-145062/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-145062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-145062
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-145062: (1.931678653s)
--- PASS: TestInsufficientStorage (10.16s)
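Note: the JSON events above echo MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19, which appears to be how this test simulates a full /var; the start then aborts with exit code 26 (RSRC_DOCKER_STORAGE). A sketch under that assumption:

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      out/minikube-linux-arm64 start -p insufficient-storage-145062 --memory=2048 \
      --output=json --wait=true --driver=docker --container-runtime=containerd
    echo $?    # 26; status --output=json --layout=cluster then reports StatusCode 507 (InsufficientStorage)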

                                                
                                    
TestRunningBinaryUpgrade (86.66s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1607099071 start -p running-upgrade-783644 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1607099071 start -p running-upgrade-783644 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.532127814s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-783644 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-783644 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.813130036s)
helpers_test.go:175: Cleaning up "running-upgrade-783644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-783644
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-783644: (2.1771508s)
--- PASS: TestRunningBinaryUpgrade (86.66s)

                                                
                                    
TestKubernetesUpgrade (373.79s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-125794 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-125794 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (55.949770077s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-125794
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-125794: (1.611499722s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-125794 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-125794 status --format={{.Host}}: exit status 7 (234.86684ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-125794 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-125794 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m58.993989972s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-125794 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-125794 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-125794 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (113.416994ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-125794] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18551
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18551-440344/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18551-440344/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-125794
	    minikube start -p kubernetes-upgrade-125794 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1257942 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-125794 --kubernetes-version=v1.30.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-125794 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-125794 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (14.387984778s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-125794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-125794
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-125794: (2.367950173s)
--- PASS: TestKubernetesUpgrade (373.79s)
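Note: the sequence above is an in-place upgrade followed by a deliberately failing downgrade. A condensed sketch of the same flow, assuming the kubernetes-upgrade-125794 profile name from this run:

    out/minikube-linux-arm64 start -p kubernetes-upgrade-125794 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 stop -p kubernetes-upgrade-125794
    out/minikube-linux-arm64 start -p kubernetes-upgrade-125794 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --driver=docker --container-runtime=containerd
    # downgrading an existing cluster is refused:
    out/minikube-linux-arm64 start -p kubernetes-upgrade-125794 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
    echo $?    # 106 (K8S_DOWNGRADE_UNSUPPORTED); delete and recreate instead, as the suggestion box advises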

                                                
                                    
TestMissingContainerUpgrade (171s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.4178804884 start -p missing-upgrade-867192 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.4178804884 start -p missing-upgrade-867192 --memory=2200 --driver=docker  --container-runtime=containerd: (1m21.979565622s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-867192
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-867192: (10.327293493s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-867192
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-867192 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-867192 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m14.787286246s)
helpers_test.go:175: Cleaning up "missing-upgrade-867192" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-867192
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-867192: (2.705954243s)
--- PASS: TestMissingContainerUpgrade (171.00s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-036844 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-036844 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (92.033064ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-036844] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18551
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18551-440344/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18551-440344/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (40.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-036844 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-036844 --driver=docker  --container-runtime=containerd: (40.228063846s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-036844 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.63s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-036844 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-036844 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.002977248s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-036844 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-036844 status -o json: exit status 2 (294.960492ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-036844","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-036844
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-036844: (2.070554079s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.37s)

                                                
                                    
TestNoKubernetes/serial/Start (8.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-036844 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-036844 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.725044918s)
--- PASS: TestNoKubernetes/serial/Start (8.73s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-036844 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-036844 "sudo systemctl is-active --quiet service kubelet": exit status 1 (360.091487ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.10s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-036844
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-036844: (1.285360997s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-036844 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-036844 --driver=docker  --container-runtime=containerd: (7.541355385s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.54s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-036844 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-036844 "sudo systemctl is-active --quiet service kubelet": exit status 1 (348.981999ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)
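Note: in --no-kubernetes mode the kubelet unit is expected to be inactive, so the non-zero exit here is the assertion rather than a failure. A sketch of the same check, assuming the NoKubernetes-036844 profile is running:

    out/minikube-linux-arm64 ssh -p NoKubernetes-036844 "sudo systemctl is-active --quiet service kubelet"
    echo $?    # non-zero (the runs above saw ssh exit status 1, with systemctl reporting status 3, i.e. inactive)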

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.19s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (100.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.672751210 start -p stopped-upgrade-357663 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0401 11:04:25.747815  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.672751210 start -p stopped-upgrade-357663 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (43.492970623s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.672751210 -p stopped-upgrade-357663 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.672751210 -p stopped-upgrade-357663 stop: (19.902672444s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-357663 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0401 11:05:38.655546  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-357663 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.417152128s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (100.81s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.09s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-357663
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-357663: (1.089079791s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.09s)

                                                
                                    
TestPause/serial/Start (90.45s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-500281 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0401 11:07:28.791189  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-500281 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m30.45244108s)
--- PASS: TestPause/serial/Start (90.45s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.15s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-500281 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-500281 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.131239799s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.15s)

                                                
                                    
TestPause/serial/Pause (1.19s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-500281 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-500281 --alsologtostderr -v=5: (1.192904211s)
--- PASS: TestPause/serial/Pause (1.19s)

                                                
                                    
TestPause/serial/VerifyStatus (0.41s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-500281 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-500281 --output=json --layout=cluster: exit status 2 (408.074391ms)

                                                
                                                
-- stdout --
	{"Name":"pause-500281","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-500281","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)
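Note: with --output=json --layout=cluster, a paused profile reports StatusCode 418 / StatusName Paused at the cluster and apiserver level while the kubelet shows 405 / Stopped, and the command itself exits 2. A sketch for pulling those fields out, assuming jq is available on the host:

    out/minikube-linux-arm64 status -p pause-500281 --output=json --layout=cluster \
      | jq '{cluster: .StatusName, apiserver: .Nodes[0].Components.apiserver.StatusName, kubelet: .Nodes[0].Components.kubelet.StatusName}'
    # expected while paused: "Paused" / "Paused" / "Stopped"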

                                                
                                    
TestPause/serial/Unpause (0.95s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-500281 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.95s)

                                                
                                    
TestPause/serial/PauseAgain (1.14s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-500281 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-500281 --alsologtostderr -v=5: (1.13791745s)
--- PASS: TestPause/serial/PauseAgain (1.14s)

                                                
                                    
TestPause/serial/DeletePaused (2.96s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-500281 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-500281 --alsologtostderr -v=5: (2.959869027s)
--- PASS: TestPause/serial/DeletePaused (2.96s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.43s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-500281
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-500281: exit status 1 (12.613062ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-500281: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.43s)
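
The pass condition above is that the delete removed everything the profile created; the failing volume inspect is the proof that the volume is gone. A hedged sketch of the same post-delete audit, using only the commands the test itself runs:

	out/minikube-linux-arm64 profile list --output json   # the deleted profile should no longer be listed
	docker ps -a                                          # no pause-500281 container left behind
	docker volume inspect pause-500281                    # expected to fail with "no such volume"
	docker network ls                                     # no pause-500281 network left behind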

                                                
                                    
TestNetworkPlugins/group/false (5.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-140404 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-140404 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (257.079785ms)

                                                
                                                
-- stdout --
	* [false-140404] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18551
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18551-440344/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18551-440344/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 11:09:11.910994  626276 out.go:291] Setting OutFile to fd 1 ...
	I0401 11:09:11.911226  626276 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:09:11.911255  626276 out.go:304] Setting ErrFile to fd 2...
	I0401 11:09:11.911276  626276 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 11:09:11.911589  626276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18551-440344/.minikube/bin
	I0401 11:09:11.912044  626276 out.go:298] Setting JSON to false
	I0401 11:09:11.913044  626276 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10302,"bootTime":1711959450,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1056-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0401 11:09:11.913173  626276 start.go:139] virtualization:  
	I0401 11:09:11.916204  626276 out.go:177] * [false-140404] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0401 11:09:11.918501  626276 out.go:177]   - MINIKUBE_LOCATION=18551
	I0401 11:09:11.918575  626276 notify.go:220] Checking for updates...
	I0401 11:09:11.921521  626276 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 11:09:11.923871  626276 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18551-440344/kubeconfig
	I0401 11:09:11.926155  626276 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18551-440344/.minikube
	I0401 11:09:11.928098  626276 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0401 11:09:11.930181  626276 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 11:09:11.932513  626276 config.go:182] Loaded profile config "force-systemd-flag-180830": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.3
	I0401 11:09:11.932612  626276 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 11:09:11.958041  626276 docker.go:122] docker version: linux-26.0.0:Docker Engine - Community
	I0401 11:09:11.958179  626276 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 11:09:12.070898  626276 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-01 11:09:12.058530336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1056-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.25.0]] Warnings:<nil>}}
	I0401 11:09:12.071010  626276 docker.go:295] overlay module found
	I0401 11:09:12.073561  626276 out.go:177] * Using the docker driver based on user configuration
	I0401 11:09:12.076095  626276 start.go:297] selected driver: docker
	I0401 11:09:12.076119  626276 start.go:901] validating driver "docker" against <nil>
	I0401 11:09:12.076135  626276 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 11:09:12.079104  626276 out.go:177] 
	W0401 11:09:12.081233  626276 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0401 11:09:12.083273  626276 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-140404 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-140404

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-140404

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-140404

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-140404

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-140404

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-140404

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-140404

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-140404

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-140404

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-140404

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-140404

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-140404" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-140404" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-140404

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-140404"

                                                
                                                
----------------------- debugLogs end: false-140404 [took: 4.908383504s] --------------------------------
helpers_test.go:175: Cleaning up "false-140404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-140404
--- PASS: TestNetworkPlugins/group/false (5.42s)
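
This group passes precisely because the start is rejected: with --container-runtime=containerd, minikube refuses --cni=false and exits with status 14 (MK_USAGE), since containerd has no built-in pod networking and needs a CNI. The debugLogs above show only "context was not found" and "Profile not found" because no cluster or kubeconfig context was ever created. For contrast, a sketch of an invocation containerd does accept; bridge is chosen only as an example CNI value (an assumption), and any of minikube's bundled CNIs should equally satisfy the check:

	# rejected, as exercised by the test:
	out/minikube-linux-arm64 start -p false-140404 --memory=2048 --cni=false --driver=docker --container-runtime=containerd
	# accepted variant (assumed example):
	out/minikube-linux-arm64 start -p false-140404 --memory=2048 --cni=bridge --driver=docker --container-runtime=containerd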

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (175.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-869040 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-869040 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m55.561968721s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (175.56s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-293463 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-293463 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m26.136697586s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-869040 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [454355b7-d342-46c9-9d78-95ca07fc63ae] Pending
helpers_test.go:344: "busybox" [454355b7-d342-46c9-9d78-95ca07fc63ae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0401 11:13:41.703898  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
helpers_test.go:344: "busybox" [454355b7-d342-46c9-9d78-95ca07fc63ae] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004655053s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-869040 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.65s)
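
The deploy step is ordinary kubectl against the profile's context: create testdata/busybox.yaml, wait for the integration-test=busybox pod to become Ready, then exec into it. A minimal sketch of the same sequence; kubectl wait stands in for the harness's own polling and is an assumption, not what the test literally runs:

	kubectl --context old-k8s-version-869040 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-869040 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m0s
	kubectl --context old-k8s-version-869040 exec busybox -- /bin/sh -c "ulimit -n"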

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-869040 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-869040 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.200219869s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-869040 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.40s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-869040 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-869040 --alsologtostderr -v=3: (12.284246321s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-869040 -n old-k8s-version-869040
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-869040 -n old-k8s-version-869040: exit status 7 (134.188828ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-869040 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-293463 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c03c8d2e-3977-4159-b772-5a9341820d27] Pending
helpers_test.go:344: "busybox" [c03c8d2e-3977-4159-b772-5a9341820d27] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c03c8d2e-3977-4159-b772-5a9341820d27] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.006042145s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-293463 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.51s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.56s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-293463 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-293463 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.390342024s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-293463 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.56s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-293463 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-293463 --alsologtostderr -v=3: (12.207974527s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-293463 -n default-k8s-diff-port-293463
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-293463 -n default-k8s-diff-port-293463: exit status 7 (81.33548ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-293463 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-293463 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
E0401 11:15:38.656167  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
E0401 11:19:25.741527  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-293463 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (4m26.412583706s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-293463 -n default-k8s-diff-port-293463
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.82s)
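
Both starts of this profile pass --apiserver-port=8444, so the API server listens on 8444 instead of the default 8443; the profile's kubeconfig context already points at that port, so normal kubectl use is unchanged. A quick hedged check, assuming minikube registered the cluster in the kubeconfig under the profile name (its usual behaviour):

	kubectl --context default-k8s-diff-port-293463 cluster-info
	kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-293463")].cluster.server}'   # expected to end in :8444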

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wnphx" [9be91511-2744-4e6c-8012-77ed9a783d22] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004043019s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wnphx" [9be91511-2744-4e6c-8012-77ed9a783d22] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004357392s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-293463 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-293463 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)
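
The image check lists whatever the container runtime has loaded and reports anything that is not a stock minikube/Kubernetes image; the busybox and kindnetd entries above are expected leftovers from earlier steps. The same listing can be taken by hand; json output is exactly what the test uses, and the table variant is an assumption based on minikube's documented format values:

	out/minikube-linux-arm64 -p default-k8s-diff-port-293463 image list --format=json
	out/minikube-linux-arm64 -p default-k8s-diff-port-293463 image list --format=table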

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-293463 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-293463 -n default-k8s-diff-port-293463
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-293463 -n default-k8s-diff-port-293463: exit status 2 (318.110873ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-293463 -n default-k8s-diff-port-293463
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-293463 -n default-k8s-diff-port-293463: exit status 2 (334.143816ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-293463 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-293463 -n default-k8s-diff-port-293463
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-293463 -n default-k8s-diff-port-293463
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.21s)
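
The pause sequence above is: pause the profile, confirm the apiserver reports Paused and the kubelet reports Stopped (both status calls exit 2, which the test tolerates), then unpause and confirm both recover. A condensed sketch of the same round trip, using only commands shown in this run; the values after unpause are the expected ones, not output captured in this log:

	out/minikube-linux-arm64 pause -p default-k8s-diff-port-293463
	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-293463   # "Paused" while paused
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-293463     # "Stopped" while paused
	out/minikube-linux-arm64 unpause -p default-k8s-diff-port-293463
	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-293463   # expected "Running" again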

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (87.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-300026 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-300026 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (1m27.256510594s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (87.26s)
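
The --embed-certs flag makes minikube write the client certificate and key inline into the kubeconfig entry (client-certificate-data / client-key-data) instead of referencing files under the profile directory. A hedged way to confirm that on this runner, using the KUBECONFIG path the run itself reports:

	grep -c "client-certificate-data" /home/jenkins/minikube-integration/18551-440344/kubeconfig   # > 0 when certs are embedded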

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-fllgf" [5e87fb3a-f46c-4139-86e1-9189ca92587e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003695348s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-fllgf" [5e87fb3a-f46c-4139-86e1-9189ca92587e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004923667s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-869040 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-869040 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-869040 --alsologtostderr -v=1
E0401 11:20:38.656097  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-869040 --alsologtostderr -v=1: (1.087386006s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-869040 -n old-k8s-version-869040
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-869040 -n old-k8s-version-869040: exit status 2 (332.035672ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-869040 -n old-k8s-version-869040
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-869040 -n old-k8s-version-869040: exit status 2 (350.548419ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-869040 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-869040 -n old-k8s-version-869040
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-869040 -n old-k8s-version-869040
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (70.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-919767 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-919767 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (1m10.120244781s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.12s)
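
With --preload=false, minikube does not load the cached preload tarball into containerd; the Kubernetes images are pulled individually during start instead. The resulting image set can be listed the same way the other image checks in this report do:

	out/minikube-linux-arm64 -p no-preload-919767 image list --format=json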

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (7.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-300026 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9f5069db-60b3-4ab8-9799-bec87a5e1f28] Pending
helpers_test.go:344: "busybox" [9f5069db-60b3-4ab8-9799-bec87a5e1f28] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9f5069db-60b3-4ab8-9799-bec87a5e1f28] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.003623638s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-300026 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.39s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-300026 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-300026 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.044016996s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-300026 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-300026 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-300026 --alsologtostderr -v=3: (12.290633201s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-300026 -n embed-certs-300026
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-300026 -n embed-certs-300026: exit status 7 (78.473557ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-300026 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (277.7s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-300026 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-300026 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.3: (4m37.278093995s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-300026 -n embed-certs-300026
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (277.70s)

TestStartStop/group/no-preload/serial/DeployApp (8.48s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-919767 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c53bedd0-4a9a-42f4-8342-b6ad2c5ccb84] Pending
helpers_test.go:344: "busybox" [c53bedd0-4a9a-42f4-8342-b6ad2c5ccb84] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c53bedd0-4a9a-42f4-8342-b6ad2c5ccb84] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.007792882s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-919767 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.48s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-919767 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-919767 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/no-preload/serial/Stop (12.56s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-919767 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-919767 --alsologtostderr -v=3: (12.559996247s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.56s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-919767 -n no-preload-919767
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-919767 -n no-preload-919767: exit status 7 (94.394012ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-919767 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (281.01s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-919767 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
E0401 11:23:39.591085  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/client.crt: no such file or directory
E0401 11:23:39.596844  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/client.crt: no such file or directory
E0401 11:23:39.607156  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/client.crt: no such file or directory
E0401 11:23:39.627501  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/client.crt: no such file or directory
E0401 11:23:39.667942  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/client.crt: no such file or directory
E0401 11:23:39.748208  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/client.crt: no such file or directory
E0401 11:23:39.908583  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/client.crt: no such file or directory
E0401 11:23:40.229232  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/client.crt: no such file or directory
E0401 11:23:40.869991  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/client.crt: no such file or directory
E0401 11:23:42.150752  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/client.crt: no such file or directory
E0401 11:23:44.711602  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/client.crt: no such file or directory
E0401 11:23:49.832077  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/client.crt: no such file or directory
E0401 11:24:00.072767  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/client.crt: no such file or directory
E0401 11:24:08.792267  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
E0401 11:24:20.553483  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/client.crt: no such file or directory
E0401 11:24:25.742094  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
E0401 11:24:45.897091  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/default-k8s-diff-port-293463/client.crt: no such file or directory
E0401 11:24:45.902506  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/default-k8s-diff-port-293463/client.crt: no such file or directory
E0401 11:24:45.912756  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/default-k8s-diff-port-293463/client.crt: no such file or directory
E0401 11:24:45.933015  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/default-k8s-diff-port-293463/client.crt: no such file or directory
E0401 11:24:45.973299  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/default-k8s-diff-port-293463/client.crt: no such file or directory
E0401 11:24:46.053574  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/default-k8s-diff-port-293463/client.crt: no such file or directory
E0401 11:24:46.213738  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/default-k8s-diff-port-293463/client.crt: no such file or directory
E0401 11:24:46.534263  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/default-k8s-diff-port-293463/client.crt: no such file or directory
E0401 11:24:47.174744  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/default-k8s-diff-port-293463/client.crt: no such file or directory
E0401 11:24:48.455819  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/default-k8s-diff-port-293463/client.crt: no such file or directory
E0401 11:24:51.016646  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/default-k8s-diff-port-293463/client.crt: no such file or directory
E0401 11:24:56.137498  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/default-k8s-diff-port-293463/client.crt: no such file or directory
E0401 11:25:01.513697  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/client.crt: no such file or directory
E0401 11:25:06.378259  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/default-k8s-diff-port-293463/client.crt: no such file or directory
E0401 11:25:26.859003  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/default-k8s-diff-port-293463/client.crt: no such file or directory
E0401 11:25:38.656420  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
E0401 11:26:07.819237  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/default-k8s-diff-port-293463/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-919767 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (4m40.525772998s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-919767 -n no-preload-919767
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (281.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mhlhz" [2f8a6524-5256-4a04-bf3c-7b2b5de6c3de] Running
E0401 11:26:23.434210  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004529162s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mhlhz" [2f8a6524-5256-4a04-bf3c-7b2b5de6c3de] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003550761s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-300026 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-300026 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (3.2s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-300026 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-300026 -n embed-certs-300026
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-300026 -n embed-certs-300026: exit status 2 (336.639821ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-300026 -n embed-certs-300026
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-300026 -n embed-certs-300026: exit status 2 (343.205973ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-300026 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-300026 -n embed-certs-300026
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-300026 -n embed-certs-300026
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.20s)

TestStartStop/group/newest-cni/serial/FirstStart (49.87s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-963635 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-963635 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (49.87002635s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.87s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-d8jtx" [16e070fd-8655-46e3-ba2a-869ba3c5a184] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003207791s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-d8jtx" [16e070fd-8655-46e3-ba2a-869ba3c5a184] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00448273s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-919767 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-919767 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/no-preload/serial/Pause (3.95s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-919767 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-919767 --alsologtostderr -v=1: (1.013532202s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-919767 -n no-preload-919767
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-919767 -n no-preload-919767: exit status 2 (357.116607ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-919767 -n no-preload-919767
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-919767 -n no-preload-919767: exit status 2 (413.369509ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-919767 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-919767 -n no-preload-919767
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-919767 -n no-preload-919767
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.95s)

TestNetworkPlugins/group/auto/Start (93.64s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-140404 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-140404 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m33.635420144s)
--- PASS: TestNetworkPlugins/group/auto/Start (93.64s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.33s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-963635 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-963635 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.327176989s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.33s)

TestStartStop/group/newest-cni/serial/Stop (1.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-963635 --alsologtostderr -v=3
E0401 11:27:29.740414  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/default-k8s-diff-port-293463/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-963635 --alsologtostderr -v=3: (1.370989943s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.37s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.36s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-963635 -n newest-cni-963635
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-963635 -n newest-cni-963635: exit status 7 (186.951842ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-963635 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.36s)

TestStartStop/group/newest-cni/serial/SecondStart (22.8s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-963635 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-963635 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.0-beta.0: (22.372580286s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-963635 -n newest-cni-963635
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (22.80s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-963635 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/newest-cni/serial/Pause (3.72s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-963635 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-963635 -n newest-cni-963635
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-963635 -n newest-cni-963635: exit status 2 (383.90695ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-963635 -n newest-cni-963635
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-963635 -n newest-cni-963635: exit status 2 (372.583377ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-963635 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-963635 -n newest-cni-963635
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-963635 -n newest-cni-963635
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.72s)
E0401 11:33:39.591244  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/client.crt: no such file or directory
E0401 11:33:50.795241  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/auto-140404/client.crt: no such file or directory
E0401 11:33:50.800450  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/auto-140404/client.crt: no such file or directory
E0401 11:33:50.810690  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/auto-140404/client.crt: no such file or directory
E0401 11:33:50.830940  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/auto-140404/client.crt: no such file or directory
E0401 11:33:50.871191  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/auto-140404/client.crt: no such file or directory
E0401 11:33:50.951439  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/auto-140404/client.crt: no such file or directory
E0401 11:33:51.111879  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/auto-140404/client.crt: no such file or directory
E0401 11:33:51.432773  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/auto-140404/client.crt: no such file or directory
E0401 11:33:52.073826  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/auto-140404/client.crt: no such file or directory
E0401 11:33:53.354501  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/auto-140404/client.crt: no such file or directory
E0401 11:33:55.914844  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/auto-140404/client.crt: no such file or directory
E0401 11:34:01.035700  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/auto-140404/client.crt: no such file or directory
E0401 11:34:11.276633  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/auto-140404/client.crt: no such file or directory
E0401 11:34:25.741790  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
E0401 11:34:26.548632  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/kindnet-140404/client.crt: no such file or directory
E0401 11:34:26.554004  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/kindnet-140404/client.crt: no such file or directory
E0401 11:34:26.564338  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/kindnet-140404/client.crt: no such file or directory
E0401 11:34:26.584637  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/kindnet-140404/client.crt: no such file or directory
E0401 11:34:26.625153  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/kindnet-140404/client.crt: no such file or directory
E0401 11:34:26.705529  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/kindnet-140404/client.crt: no such file or directory
E0401 11:34:26.865883  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/kindnet-140404/client.crt: no such file or directory
E0401 11:34:27.186678  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/kindnet-140404/client.crt: no such file or directory
E0401 11:34:27.826836  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/kindnet-140404/client.crt: no such file or directory
E0401 11:34:29.107868  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/kindnet-140404/client.crt: no such file or directory
E0401 11:34:31.668220  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/kindnet-140404/client.crt: no such file or directory
E0401 11:34:31.757425  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/auto-140404/client.crt: no such file or directory
E0401 11:34:36.788800  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/kindnet-140404/client.crt: no such file or directory
E0401 11:34:38.815060  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/no-preload-919767/client.crt: no such file or directory
E0401 11:34:45.896901  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/default-k8s-diff-port-293463/client.crt: no such file or directory
E0401 11:34:47.029176  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/kindnet-140404/client.crt: no such file or directory

TestNetworkPlugins/group/kindnet/Start (86.89s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-140404 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0401 11:28:39.591272  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/old-k8s-version-869040/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-140404 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m26.893280953s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (86.89s)

TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-140404 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

TestNetworkPlugins/group/auto/NetCatPod (9.32s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-140404 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8p48w" [a8b0d2ea-68a2-4379-af80-ef2467ade1b0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8p48w" [a8b0d2ea-68a2-4379-af80-ef2467ade1b0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004508381s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.32s)

TestNetworkPlugins/group/auto/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-140404 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.27s)

TestNetworkPlugins/group/auto/Localhost (0.44s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-140404 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.44s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-140404 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

TestNetworkPlugins/group/calico/Start (77.08s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-140404 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0401 11:29:25.742223  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/addons-126557/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-140404 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m17.080925734s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.08s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-zcqwl" [ea17be95-8483-43c4-9f0b-c336ded5d9c3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00543022s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-140404 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-140404 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qtzxs" [e730ebf9-48bc-47ba-a143-2c126e6c2971] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qtzxs" [e730ebf9-48bc-47ba-a143-2c126e6c2971] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003612082s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.35s)

TestNetworkPlugins/group/kindnet/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-140404 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-140404 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-140404 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

TestNetworkPlugins/group/custom-flannel/Start (66.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-140404 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0401 11:30:13.580632  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/default-k8s-diff-port-293463/client.crt: no such file or directory
E0401 11:30:21.704166  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
E0401 11:30:38.655970  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/functional-805196/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-140404 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m6.08831683s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (66.09s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2w64s" [f8dd8f79-b866-4ce6-be7c-96f282f492dd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007874958s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-140404 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.47s)

TestNetworkPlugins/group/calico/NetCatPod (9.4s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-140404 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6z5sg" [1cce4dae-4a74-44fd-9f26-92d09f4e6a2d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6z5sg" [1cce4dae-4a74-44fd-9f26-92d09f4e6a2d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.009306923s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.40s)

TestNetworkPlugins/group/calico/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-140404 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-140404 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-140404 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-140404 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.45s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-140404 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-27kxd" [f8371e57-234d-435c-b961-0c913f6a9d50] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-27kxd" [f8371e57-234d-435c-b961-0c913f6a9d50] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004552044s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.45s)

TestNetworkPlugins/group/enable-default-cni/Start (96.81s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-140404 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-140404 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m36.805244707s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (96.81s)

TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-140404 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-140404 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.29s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-140404 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

TestNetworkPlugins/group/flannel/Start (63.67s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-140404 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0401 11:31:54.971389  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/no-preload-919767/client.crt: no such file or directory
E0401 11:31:54.976601  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/no-preload-919767/client.crt: no such file or directory
E0401 11:31:54.986820  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/no-preload-919767/client.crt: no such file or directory
E0401 11:31:55.007091  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/no-preload-919767/client.crt: no such file or directory
E0401 11:31:55.047336  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/no-preload-919767/client.crt: no such file or directory
E0401 11:31:55.127577  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/no-preload-919767/client.crt: no such file or directory
E0401 11:31:55.287882  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/no-preload-919767/client.crt: no such file or directory
E0401 11:31:55.608372  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/no-preload-919767/client.crt: no such file or directory
E0401 11:31:56.249181  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/no-preload-919767/client.crt: no such file or directory
E0401 11:31:57.529708  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/no-preload-919767/client.crt: no such file or directory
E0401 11:32:00.090196  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/no-preload-919767/client.crt: no such file or directory
E0401 11:32:05.211012  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/no-preload-919767/client.crt: no such file or directory
E0401 11:32:15.453107  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/no-preload-919767/client.crt: no such file or directory
E0401 11:32:35.933885  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/no-preload-919767/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-140404 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m3.665918203s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.67s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-rtr9c" [8fc13bc3-c7cb-45b9-a4ad-1ff91442535a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0039176s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-140404 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-140404 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-k5s2n" [19aced3c-69ea-4248-ba7a-012b41055bbd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-k5s2n" [19aced3c-69ea-4248-ba7a-012b41055bbd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003905272s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-140404 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-140404 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-k5d5w" [09e91d46-e092-4d76-9d53-5d0d8d1247cd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-k5d5w" [09e91d46-e092-4d76-9d53-5d0d8d1247cd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004259972s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-140404 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-140404 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-140404 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-140404 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-140404 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-140404 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (88.65s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-140404 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-140404 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m28.647555008s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.65s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-140404 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-140404 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-msv9m" [ceec6837-160e-4705-8c2c-cb379267c474] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-msv9m" [ceec6837-160e-4705-8c2c-cb379267c474] Running
E0401 11:35:07.510260  445754 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18551-440344/.minikube/profiles/kindnet-140404/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004199883s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-140404 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-140404 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-140404 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (31/335)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.57s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-971411 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-971411" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-971411
--- SKIP: TestDownloadOnlyKic (0.57s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-382995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-382995
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.98s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-140404 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-140404

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-140404

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-140404

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-140404

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-140404

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-140404

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-140404

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-140404

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-140404

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-140404

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-140404

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-140404" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-140404" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-140404

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-140404"

                                                
                                                
----------------------- debugLogs end: kubenet-140404 [took: 4.800935475s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-140404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-140404
--- SKIP: TestNetworkPlugins/group/kubenet (4.98s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.9s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-140404 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-140404

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-140404

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-140404

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-140404

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-140404

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-140404

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-140404

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-140404

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-140404

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-140404

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-140404

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-140404" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-140404" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: ip a s:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: ip r s:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: iptables-save:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: iptables table nat:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-140404

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-140404

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-140404" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-140404" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-140404

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-140404

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-140404" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-140404" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-140404" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-140404" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-140404" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: kubelet daemon config:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> k8s: kubelet logs:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-140404

>>> host: docker daemon status:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: docker daemon config:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: docker system info:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: cri-docker daemon status:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: cri-docker daemon config:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: cri-dockerd version:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: containerd daemon status:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: containerd daemon config:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: containerd config dump:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: crio daemon status:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: crio daemon config:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: /etc/crio:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

>>> host: crio config:
* Profile "cilium-140404" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-140404"

----------------------- debugLogs end: cilium-140404 [took: 5.706308759s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-140404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-140404
--- SKIP: TestNetworkPlugins/group/cilium (5.90s)